Today we've got a bit of a treat. I've been holding off on this for a while, because I wanted to do it justice. This isn't the typical wankish crackpottery, but rather a deep and interesting bit of crackpottery. A reader sent me a link to a website of a mathematics professor, N. J. Wildberger, at the University of New South Wales, which contains a long, elegant screed against the evils of set theory, titled "Set Theory: Should You Believe?"
It's an interesting article - and I don't mean that sarcastically. It's over the top, to the point of extreme silliness in places, but the basic idea of it is not entirely unreasonable. Back at the beginnings of set theory, when Cantor was first establishing his ideas, there was a lot of opposition to it. In
my opinion, the most interesting and credible critics of set theory were the constructivists. Put briefly, constructivists believe that all valid math is based on constructing things. If something exists, you
can show a concrete instance of it. If you can describe it, but you can't build it, then
it's just an artifact of logic.
Some of that opposition continues to this day, and it's not just the domain of nuts. There are
serious mathematicians who've questioned the meaningfulness of some of the artifacts of modern
set-theory based mathematics. Just to give one prominent example, Greg Chaitin has given lectures in which he discusses the idea that the real numbers aren't real: they're just logical artifacts which can never actually occur in the real world, and rationals are the only real real numbers. (I don't think that Greg really believes that - just that he thinks it's an interesting idea to consider. He's far
too entranced with set theory. But he clearly considers it valid enough to be worth thinking about
and talking about.)
Professor Wildberger is very much in the constructivist school of thought. That doesn't mean
that he's right, and in fact, I think he makes some dreadful errors in his article. But it's
worth the look. His basic point is a good one: that the way we teach math really stinks. His
explanation of why it stinks is, I think, pretty far off base. Here's the gist of
his fundamental thesis:
Modern mathematics doesn't make complete sense. The unfortunate consequences include difficulty in deciding what to teach and how to teach it, many papers that are logically flawed, the challenge of recruiting young people to the subject, and an unfortunate teetering on the brink of irrelevance.
If mathematics made complete sense it would be a lot easier to teach, and a lot easier to learn. Using flawed and ambiguous concepts, hiding confusions and circular reasoning, pulling theorems out of thin air to be justified `later' (i.e. never) and relying on appeals to authority don't help young people, they make things more difficult for them.
If mathematics made complete sense there would be higher standards of rigour, with fewer but better books and papers published. That might make it easier for ordinary researchers to be confident of a small but meaningful contribution. If mathematics made complete sense then the physicists wouldn't have to thrash around quite so wildly for the right mathematical theories for quantum field theory and string theory. Mathematics that makes complete sense tends to parallel the real world and be highly relevant to it, while mathematics that doesn't make complete sense rarely ever hits the nail right on the head, although it can still be very useful.
So where exactly are the logical problems? The troubles stem from the consistent refusal by the Academy to get serious about the foundational aspects of the subject, and are augmented by the twentieth century's whole hearted and largely uncritical embrace of Set Theory.
Most of the problems with the foundational aspects arise from mathematicians' erroneous belief that they properly understand the content of public school and high school mathematics, and that further clarification and codification is largely unnecessary. Most (but not all) of the difficulties of Set Theory arise from the insistence that there exist `infinite sets', and that it is the job of mathematics to study them and use them.
The problem with what comes later is apparent from just this much. Professor Wildberger believes
that we're starting from the wrong foundations for math - and that the problems of mathematical
education come not just from poor teaching, but from the fact that the teachers are building on
a flawed foundation.
I think he's pretty off-base with this. I remember my high school math teachers. Most of them didn't know diddly-squat about set theory. They didn't know or care about the axioms of set theory, or the idea of infinite sets. They didn't know how real numbers are formally defined. It didn't matter to them, or to the material they were teaching. That fact is a big part of Wildberger's point: that teachers at that level leave out, or don't understand, the foundations. But that's not why the lousy teachers were lousy teachers. The great math teachers that I had also didn't know any of the deep foundations. They didn't need to.
Teaching a subject like math doesn't really start from foundations. In fact, teaching any really complex subject doesn't start from foundations. Foundations are left until after you understand a bit. My fellow SBer Chad Orzel has said that part of teaching introductory subjects is lying to your students. You start by telling them about approximations to the truth, which you know are wrong in the details. In physics, you start off with basic Newtonian physics and immutable masses. You don't start off with relativity and quantum phenomena - the students simply don't yet have the intellectual tools to deal with them. You start off with the simple approximation, and then teach from there; once the students have some grasp on the basics, you move on to introduce the subtleties. And even there, you continue with your white lies. Again in physics, you move on to teach relativity - but you don't start with the problems of the incompatibility of quantum physics with relativity; you start by just teaching relativity. Once the students grasp that, then you can take the next step towards the foundation, and talk about the interactions between relativity and quantum phenomena.
We do the same thing with language. We don't start teaching a student a language by presenting them with the foundational grammar of the language. We start by lying. We tell them that "a sentence looks like this", and teach them to use that basic structure with some basic vocabulary. Then we move on to show them how the structure can vary, by introducing new clauses, tenses, senses, voices, etc.
So I disagree with him right from the beginning. But I admit that his idea has some validity; I don't think he's right, but the argument can be made by a reasonable person, and it's entirely possible that he's right and I'm wrong.
What happens next is where he starts to go off the rails. He's clearly a strict constructivist, and as a result, has a deep, visceral dislike of set theory, and all things related to it. The problem is that he takes his hatred of set theory to such a point that he basically argues for discarding logic in math.
Skipping past his diatribe about how most math professors don't understand fundamental math (which I find
to be a rather shallow bit of hand-waving snobbery), we get to the meat of his argument about set
theory:
But there are two foundational topics that are introduced in the early undergraduate
years: infinite set theory and real numbers. Historically these are very controversial topics, fraught
with logical difficulties which embroiled mathematicians for decades. The presentation these days is
matter of fact---`an infinite set is a collection of mathematical objects which isn't finite' and `a real
number is an equivalence class of Cauchy sequences of rational numbers'.

Or some such nonsense. Set theory as presented to young people simply doesn't make sense, and the
resultant approach to real numbers is in fact a joke! You heard it correctly---and I will try to explain
shortly. The point here is that these logically dubious topics are slipped into the curriculum in an
off-hand way when students are already overworked and awed by all the other material before them. There is
not the time to ruminate and discuss the uncertainties of generations gone by. With a slick enough
presentation, the whole thing goes down just like any other of the subjects they are struggling to learn.
From then on till their retirement years, mathematicians have a busy schedule ahead of them, ensuring that
few get around to critically examining the subject matter of their student days.
It all comes back to those darned nasty infinite sets!
Most of math has, at one time or another, embroiled mathematicians in logical difficulties. The diatribe that I skipped over included a rant about polynomials: finding roots of polynomials caused problems in math - from arithmetic difficulties, to logical and philosophical difficulties - and those arguments raged for
centuries. Professor Wildberger clearly doesn't have any problem with the idea of
polynomial roots, nor, I suspect does he have problems with complex numbers, which resolved
that problem - but which met with profound opposition from the precursors of the constructivists: "If your imaginary numbers are real, show me 3+2i apples!"
This really is the heart of his argument. As we'll see, his fundamental problem is the idea of
the infinite set. He does not believe that infinite sets exist in any meaningful way - because
you can't build one. And since set theory is so dependent on the idea of infinite sets, it must
be completely bogus. And since modern math is taught using set theory as its fundamental
basis, that means that everything that math students are taught is wrong: mathematicians
do not understand math, because what they think they understand is based on a fundamental
falsehood.
I think we can agree that (finite) set theory is understandable. There are many examples
of (finite) sets, we know how to manipulate them effectively, and the theory is useful and powerful
(although not as useful and powerful as it should be, but that's a different story).

So what about an `infinite set'? Well, to begin with, you should say precisely what the term means.
Okay, if you don't, at least someone should. Putting an adjective in front of a noun does not in itself
make a mathematical concept. Cantor declared that an `infinite set' is a set which is not finite. Surely
that is unsatisfactory, as Cantor no doubt suspected himself. It's like declaring that an `all-seeing
Leprechaun' is a Leprechaun which can see everything. Or an `unstoppable mouse' is a mouse which cannot be
stopped. These grammatical constructions do not create concepts, except perhaps in a literary or poetic
sense. It is not clear that there are any sets that are not finite, just as it is not clear that there are
any Leprechauns which can see everything, or that there are mice that cannot be stopped. Certainly in
science there is no reason to suppose that `infinite sets' exist. Are there an infinite number of quarks
or electrons in the universe? If physicists had to hazard a guess, I am confident the majority would say:
No. But even if there were an infinite number of electrons, it is unreasonable to suppose that you can get
an infinite number of them all together as a single `data object'.
See what I mean? The infinite sets are the problem. You can't create an infinite set - they only exist
as a result of logic. They're not real. And since they're not real, you can't trust anything
that you can prove from them. He goes on to try to continue to build his case, and cast more doubt
on the concept:
The dubious nature of Cantor's definition was spectacularly demonstrated by the contradictions in
`infinite set theory' discovered by Russell and others around the turn of the twentieth century. Allowing
any old `infinite set' à la Cantor allows you to consider the `infinite set' of `all infinite sets', and
this leads to a self-referential contradiction. How about the `infinite sets' of `all finite sets', or
`all finite groups', or perhaps `all topological spaces which are homeomorphic to the sphere'? The
paradoxes showed that unless you are very particular about the exact meaning of the concept of `infinite
set', the theory collapses. Russell and Whitehead spent decades trying to formulate a clear and
sufficiently comprehensive framework for the subject.

Let me remind you that mathematical theories are not in the habit of collapsing. We do not routinely say, "Did you hear that Pseudo-convex cohomology theory collapsed last week? What a shame! Such nice people too."
This is where he starts to get sloppy. Yes, if you're not very particular about your definition of
sets, and in particular, infinite sets, you can run into trouble. But as Gödel showed, you don't need
sets to get into trouble. Any time in math, if you're not very careful, you can get into trouble. Pick one bogus axiom, and everything you've built from it collapses like a house of cards. Set theory isn't
unique in this way. If you're not careful when you pick your axioms of geometry, you might get the result
that there's no such thing as a parallel line. If you're not careful when you pick your axioms describing numbers, you might end up with x+y != y+x.
But Professor Wildberger wants to focus on set theory alone, and claim that this problem
is uniquely one of infinite sets. He can't show that there's anything wrong with it, other than repeating the same old constructivist arguments: "you can't create a collection of an infinite number of things, so infinite sets don't exist". But that's an old, mostly discredited argument. So
the professor needs to try to build it up, to show that it's been improperly discredited. So, sadly, he pulls out an old creationist trick: the appeal to authority, and almost a
quote mine: "Look - Russell and Whitehead showed that it's a mess!" Now, R&W didn't
conclude that set theory was a hopeless mess. They just accepted the sad fact that
Gödel had shown that their attempt to create the complete, perfect mathematics was
doomed to failure. That doesn't mean that there's a problem with set theory; just that there's
a limit to formal reasoning. That's not a surprise to any modern mathematician (or even just
a math geek like me). We've grown up with Gödel's limits.
But Professor Wildberger thinks it's far more important than that:
So did analysts retreat from Cantor's theory in embarrassment? Only for a few years, till Hilbert rallied the troops with his battle-cry "No one shall expel us from the paradise Cantor has created for us!" To which Wittgenstein responded "If one person can see it as a paradise for mathematicians, why should not another see it as a joke?"
Do modern texts on set theory bend over backwards to say precisely what is and what is not an infinite set? Check it out for yourself---I cannot say that I have found much evidence of such an attitude, and I have looked. Do those students learning `infinite set theory' for the first time wade through The Principia? Of course not, that would be too much work for them and their teachers, and would dull that pleasant sense of superiority they feel from having finally `understood the infinite'.
Again, he quotemines, to try to create an impression that great minds agree with him.
And then, it gets sad. Set theory, like any mathematical theory, is built on a set of axioms.
Since the axioms work, and allow proofs of things he doesn't like, Wildberger concludes that the
axiomatic approach is totally, fatally flawed, and goes on a rant about the evils of axioms. Axioms aren't math, they're religion! We need to get rid of those stinking rotten axioms that are causing all of
the grief!
The bulwark against such criticisms, we are told, is having the appropriate collection of `Axioms'! It turns out, completely against the insights and deepest intuitions of the greatest mathematicians over thousands of years, that it all comes down to what you believe. Fortunately what we as good modern mathematicians believe has now been encoded and deeply entrenched in the `Axioms of Zermelo--Fraenkel'. Although there was quite a bit of squabbling about this in the early decades of the last century, nowadays there are only a few skeptics. We mostly attend the same church, dutifully repeat the same incantations, and insure our students do the same.
This leads into a presentation of the axioms of set theory, presented in a way that tries to make
them look as silly as possible - so that he can then follow it with a critique of how silly
they are. The problem is, it's all silliness.
All completely clear? This sorry list of assertions is, according to the majority of mathematicians, the proper foundation for set theory and modern mathematics! Incredible!
See, we're supposed to be very impressed that the axioms, stated blankly, without explanation,
do not form an obvious basis for math. They're not easy to really understand, so they must
be ridiculous. And they've got the horrible "Axiom of infinity"!
The `Axioms' are first of all unintelligible unless you are already a trained mathematician. Perhaps you disagree? Then I suggest an experiment---inflict this list on a random sample of educated non-mathematicians and see if they buy---or even understand---any of it. However even to a mathematician it should be obvious that these statements are awash with difficulties. What is a property? What is a parameter? What is a function? What is a family of sets? Where is the explanation of what all the symbols mean, if indeed they have any meaning? How many further assumptions are hidden behind the syntax and logical conventions assumed by these postulates?
Set theory doesn't exist on its own. Wildberger is trying to suggest that it does. But it really doesn't. Set theory works in conjunction with first order predicate logic. You can build all of math using those two - but take away sets, and the logic doesn't have a model; take away the logic, and you can't reason about sets. What Wildberger is doing is pretending that the places where set theory depends on logic are faults in set theory. The axioms are statements of predicate logic (with the exception of the axiom of subsets, which is actually an axiom schema - a second-order statement - parametric on a predicate). The rest of it is all built on logic. But Wildberger pretends that it isn't - claiming that "property", "function", and "parameter" are ill-defined terms because they aren't defined in set theory. But they are defined in predicate logic - the necessary companion of set theory.
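For reference, here's the subset (separation) schema in its usual first-order form - one axiom per predicate φ, parameters suppressed - which is exactly where the dependence on the logic shows up:

```latex
\forall A \; \exists B \; \forall x \; \bigl( x \in B \iff ( x \in A \wedge \varphi(x) ) \bigr)
```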
And then we get back to his favorite hobby-horse:
And Axiom 6: There is an infinite set!? How in heavens did this one sneak in here? One of the whole points of Russell's critique is that one must be extremely careful about what the words `infinite set' denote. One might as well declare that: There is an all-seeing Leprechaun! or There is an unstoppable mouse!
Oh, the horror! The infinite set! We can't have that! And again, we see the pseudo-quote mine of
Russell - the implication that Russell said that you can't have infinite sets. But Russell didn't
say that. And Wildberger uses a form of the axiom of infinity that I've never seen before. The way that Wildberger presents it is "There exists an infinite set". The way that it's normally presented is
"∃N: ∅∈N ∧ (∀x : x∈N ⇒ x∪{x}∈N)". In other words, the
usual presentation of the axiom of infinity doesn't just say that there's an infinite set; it defines
a specific infinite set with specific properties according to the other axioms. It's downright
constructivist, in fact.
There's a reason that Wildberger chose to present it in that dreadful way: because if he presented it in its standard form, then his rant following it wouldn't have worked. He's trying to say that the axioms are invalid because they don't specify what an "infinite set" really means, and that Russell showed that infinite sets cause trouble unless you're very careful about how you define the concept. But the real axioms do carefully define the infinite set in a way that works - and that shows us how
to create the set of natural numbers.
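To spell that out (standard textbook material, nothing exotic): the set whose existence the axiom asserts contains the von Neumann naturals, each one built by a single concrete operation on its predecessor:

```latex
0 = \varnothing, \qquad 1 = 0 \cup \{0\} = \{\varnothing\}, \qquad 2 = 1 \cup \{1\} = \{\varnothing, \{\varnothing\}\}, \qquad n+1 = n \cup \{n\}
```

The set of natural numbers is then carved out as the intersection of all sets satisfying the axiom's closure condition.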
Wildberger clearly understands this problem - which leads into the next section of his article, in which he takes on the entire concept of axioms in mathematics.
Occasionally logicians inquire as to whether the current `Axioms' need to be changed further, or augmented. The more fundamental question---whether mathematics requires any Axioms ---is not up for discussion. That would be like trying to get the high priests on the island of Okineyab to consider not whether the Divine Ompah's Holy Phoenix has twelve or thirteen colours in her tail (a fascinating question on which entire tomes have been written), but rather whether the Divine Ompah exists at all. Ask that question, and icy stares are what you have to expect, then it's off to the dungeons, mate, for a bit of retraining.
Mathematics does not require `Axioms'. The job of a pure mathematician is not to build some elaborate castle in the sky, and to proclaim that it stands up on the strength of some arbitrarily chosen assumptions. The job is to investigate the mathematical reality of the world in which we live. For this, no assumptions are necessary. Careful observation is necessary, clear definitions are necessary, and correct use of language and logic are necessary. But at no point does one need to start invoking the existence of objects or procedures that we cannot see, specify, or implement.
This is where the constructivism becomes blindingly obvious, and where we see how he misuses the word "axiom" to build his case. Logic is fine to him - so long as it is never used to describe something that we cannot create a concrete physical instantiation of.
He wants to replace axioms by definitions, and as near as I can tell, the difference between
axioms and definitions is that definitions are, according to him, absolutely concrete. Things
like the axiom of infinity, which describe something in terms of logic which you can't build
out of matter, are unacceptable: they're axioms, not definitions. In his own words:
The difficulty with the current reliance on `Axioms' arises from a grammatical confusion, along with the perceived need to have some (any) way to continue certain ambiguous practices that analysts historically have liked to make. People use the term `Axiom' when often they really mean definition. Thus the `axioms' of group theory are in fact just definitions. We say exactly what we mean by a group, that's all. There are no assumptions anywhere. At no point do we or should we say, `Now that we have defined an abstract group, let's assume they exist'. Either we can demonstrate they exist by constructing some, or the theory becomes vacuous. Similarly there is no need for `Axioms of Field Theory', or `Axioms of Set theory', or `Axioms' for any other branch of mathematics---or for mathematics itself!
I disagree with his characterization. In fact, I think this is where he stops being a reasonable
constructivist, and starts to verge on crackpottery. There are no axioms of field theory? No axioms of
group theory? How does that work? I really would like to know just how Professor Wildberger does group
theory without working with the set of real numbers - since that's clearly infinite, and many of the
theorems of group theory require the axiom of choice!
This is nothing but linguistic game-playing: exactly what he accuses his opponents of doing. He's playing with definitions and terms in a vague and ambiguous way in order to create a false distinction
between the valid logical statements that he likes ("definitions"), and the valid logical statements that he doesn't like (axioms). The only real difference between the two is his personal opinion of them. Clearly, as someone who does work in group and field theory, he's using axioms that work on infinite sets, like the set of numbers. But he just declares them to be "definitions" of concrete things, and
since they're not dirty, nasty, evil "axioms", they're OK.
At least he gets honest about admitting why he does this:
We have politely swallowed the standard gobble dee gook of modern set theory from our student days---around the same time that we agreed that there most certainly are a whole host of `uncomputable real numbers', even if you or I will never get to meet one, and yes, there no doubt is a non-measurable function, despite the fact that no one can tell us what it is, and yes, there surely are non-separable Hilbert spaces, only we can't specify them all that well, and it surely is possible to dissect a solid unit ball into five pieces, and rearrange them to form a solid ball of radius two.
And yes, all right, the Continuum hypothesis doesn't really need to be true or false, but is allowed to hover in some no-man's land, falling one way or the other depending on what you believe. Cohen's proof of the independence of the Continuum hypothesis from the `Axioms' should have been the long overdue wake-up call. In ordinary mathematics, statements are either true, false, or they don't make sense. If you have an elaborate theory of `hierarchies upon hierarchies of infinite sets', in which you cannot even in principle decide whether there is anything between the first and second `infinity' on your list, then it's time to admit that you are no longer doing mathematics.
Yes, infinite set theory creates some strange results. The Banach-Tarski paradox (the thing about the balls that he mentions is a B-T variant) is definitely strange and uncomfortable. That I agree with. B-T bothers me. It's a nasty beast, which I find damned hard to wrap my head around. But I don't reject things just because they're hard.
Where he gets silly is the "uncomputable numbers" bit. That's where he's starting to touch on my
specialty. And sorry, prof, but the uncomputable numbers are absolutely real. The square root of two is
(weakly) non-computable. (By weakly non-computable, I mean that we can compute an approximation of it to any degree of accuracy that we want, but we can never computationally produce the exact number.) Chaitin's Ω is well-defined, and thoroughly (strongly) non-computable (although you can compute the first couple of dozen digits of it). They're real, they're a fact of life in
computing. Our comfort or discomfort means nothing. Quantum physics makes me damned uncomfortable too, but I can't argue that it must be wrong because it makes me queasy.
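To illustrate what I mean by weak non-computability with a toy sketch of my own (just Python's standard decimal module; nothing here comes from Wildberger's article): we can ask for as many digits of the square root of two as we like, but no finite computation ever hands us the exact number.

```python
from decimal import Decimal, getcontext

def sqrt2_to(digits):
    """Approximate sqrt(2) to roughly `digits` significant digits.

    Any finite precision is reachable; the exact irrational value
    never is -- that's the "weakly non-computable" sense above.
    """
    getcontext().prec = digits
    return Decimal(2).sqrt()

print(sqrt2_to(50))
# 1.4142135623730950488016887242096980785696718753769
```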
The part about the Continuum hypothesis is, to me, even sillier. It's just more of the extreme
constructivist viewpoint, but carried to an extreme that I thought died with the failure of Russell and
Whitehead. Before Gödel and incompleteness, there was a dream that math could be solved,
that we could create a perfect mathematics in which every possible statement was provably true or false,
and where there were no contradictions. Wildberger seems to still be a part of that school, to believe in
that dream. But it doesn't work. Math has limits: it can't be both consistent and complete. There are, inevitably, statements that aren't provable. The Continuum hypothesis is one of those strange places. Personally, I don't even see it as a problem. Euclid defined the axioms of geometry. He added the
parallel axiom, even though he didn't think it should be necessary. It turns out that without it,
you can get different results. You can't get to standard Euclidean geometry without defining the answer to "Given a point p and a line l, how many lines parallel to l pass through p?" as being 1. There's no reason that it has to be. You get different geometries if you pick different values. But that's fine! The different geometries are interesting, and have interesting applications. With
respect to the continuum, it's the same story: there's no reason that it has to be either true or false. You get a different structure to your universe of sets depending on which value you choose.
Anyway, he continues in this vein for quite a while longer. He also takes some time to attack the idea
of the natural numbers as an infinite set, and the concept of the real numbers - but they're just more of the same kind of thing that we've already seen in the above quoted sections: extreme constructivism,
combined with an intense revulsion for the concept of infinite sets.
I seem to remember a Heinlein novel (can't remember which one just now) in which he was ranting about modern math, in particular modern set theory - and one of the characters had sat down and "put mathematics on a solid base again", presumably without those nasty infinite sets of different cardinality and so forth. In any case, Wildberger doesn't seem to be alone in this one.
One thing that I wonder about constructionists who deny the existence of real numbers: do they believe in circles? If you have a perfect circle, you have pi, and if you have pi, you have an irrational real number. Not to mention squares and sqrt(2)...
Never mind the reals. The reals are a red herring. What about the integers!?! If he wants to throw out the reals, let him! I think it's stupid, but there is a halfway reasonable argument for it. Throwing out the set of integers is just bonkers, though.
So, if there are no infinite sets, does that mean that there is a greatest integer?
Overall, Wildberger's a really interesting guy. He combines the passions of a crank with a serious mathematics education. For example, check out his "rational trigonometry", in which he redefines lots of things to try to make trigonometry simpler. It is all mathematically quite correct, and in fact it does simplify some things, but not by much or in fundamentally new ways (and some of his comparisons are highly exaggerated).
I rarely agree with Wildberger, and I'd worry about myself if I started regularly agreeing with him. On the other hand, he can be pretty entertaining.
Wow. I'm in my first Abstract Algebra class right now and found a lot of this quite informative.
Also, not that this would excuse him, but I don't think we've studied much about axioms of group theory. However, we have looked at definitions of "group" and different types of groups.
It seems as if all irrational numbers are at least your "weakly non-computable". I think there is a real distinction that is useful to be made between numbers that have an algorithm to approximate them as closely as desired, and those that do not, and the label "weakly non-computable" obscures that.
I think that having the vast majority of reals be uncomputable is decidedly odd. As a set, they must exist, given the standard axioms. At the same time, it's rather difficult to point to an example of one.
If we can't actually use them, do we really need them? Can we instead construct an alternative definition of "computable reals" that correspond to what we can construct and work with? Would this be a more or less interesting field to study?
Have you (or anyone you know) read his book, Divine Proportions: Rational Trigonometry to Universal Geometry?
@Aaron:
In fact, one can produce a theory of "computable reals". This was done by Hermann Weyl in his book The Continuum. Whether it is useful or not is up to taste (my taste says no, fwiw).
This sort of thinking leads to constructivist theorems like "every function is continuous" (which follows because "functions" are so strongly restricted). Such things can be found in Bishop's Foundations of Constructive Analysis.
@ Capt. Pikachu:
The book avoids some irrationals by working with distance squared rather than distance, and by working with (what we would call) the square of the sine of an angle as the measure of an angle (rather than radians).
As far as that goes, it's all correct, and it would have made a nice Mathematical Monthly article. The book seems to be mathematically correct - he goes on to develop the standard trig stuff on these definitions. Whether it simplifies things particularly is debatable.
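To make those two replacements concrete, here's a rough sketch in Python (my own illustration, not code from the book; his names for these quantities are, if I recall correctly, "quadrance" and "spread"):

```python
from fractions import Fraction

def quadrance(p, q):
    """Squared distance between two points: rational whenever the
    coordinates are rational, so no square roots ever appear."""
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def spread(u, v):
    """Angle measure between two lines given by direction vectors:
    sin^2 of the angle between them, again rational for rational input."""
    cross = u[0] * v[1] - u[1] * v[0]
    return Fraction(cross ** 2,
                    (u[0] ** 2 + u[1] ** 2) * (v[0] ** 2 + v[1] ** 2))

# The 3-4-5 right triangle: the quadrance of the hypotenuse is 25,
# and the spread between the two legs is 1 (a right angle).
print(quadrance((0, 0), (3, 4)))   # 25
print(spread((3, 0), (0, 4)))      # 1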
What is more suspect is his claims that the book will REVOLUTIONIZE ALL MATHEMATICS and IS A LEAP FORWARD LIKE COPERNICUS' THEORY OF PLANETARY MOTION etc. I may be unfairly paraphrasing here, I don't have the book at hand.
It's odd - authors usually go all cranky or all sane. He walks the line.
He was my first year linear algebra instructor by the way. He was pretty normal then, but that was a few years back.
PS I should qualify my statement that his book is mathematically correct - I only gave it a cursory examination. If he does claim, as you suggest, that he constructs Euclidean geometry without any "axioms", then he is - how do I put it - "wrong".
Here is another quote from his article:

``Neither you nor I nor anyone ever living in this universe will ever be able to factor this [really huge] number, since most of its `prime factors' are almost surely so huge as to be inexpressible, which means they don't exist.''

So Wildberger believes that integers that are too large are non-existent "fiction". In other words, there exists some integer M such that any number larger than M is fiction.

He should tell us what M is. Otherwise how can we determine if the sentences in which we talk about large numbers are fiction or non-fiction?

I grant that it is probably pretty hard to figure out what M is --- it seems to be one of those funny numbers that must exist, but we will probably never know for sure what it is, exactly. So instead he should at least give an actual integer, let's call it m, such that any number below m is almost certainly non-fictitious (in his opinion). That way conservative scientists and mathematicians can be sure if they should try to publish their next paper in Nature, or in Asimov's Science Fiction.
I'm not going to defend Wildberger, because I suspect his arguments may not be defensible. But I will point out that it is absolutely baseless to attribute worries about infinite sets to any form of constructivism. Constructive mathematics has infinite sets, does not deny the existence of the continuum, nor does it contradict conventional mathematics. Rather, it simply seeks to adopt a form of hygiene in mathematical arguments in which one distinguishes between demonstrated existence and failure of non-existence. There are good reasons for doing this, which we need not discuss here. But be assured that it is perfectly constructive to assert that the set of natural numbers is infinite, that there are noncomputable functions, or that the real numbers exist! Wildberger's complaints, whatever their merits, have nothing to do with constructivism!
Any time someone wants to find The One True Reality, you know it will be trouble, whether it is philosophy, math, science or society.
Sorry, but as usual I have to map this to science to get a familiar grip on these ideas.
And there it is apparent that useful math doesn't need to be well founded as such. For example, AFAIU quantization hasn't yet been formalized in terms of math.
Coming from the other direction onto the subject of math as "'data objects'", I have some sympathy for Max Tegmark's idea of using Occam's razor to first claim that physics fundamentally may be isomorphic to mathematical structures, and then again to claim that it may be equivalent to them and do away with surplus metaphysical baggage. But I doubt it can be correct, since dualities mean different formal theories describe the same physics.
Mark mentioned the tension between extreme constructivism and computation. Ars Mathematica has a post that just made me aware of this:
It, and Wikipedia, also made me aware that another motivation for extreme constructivism can be to subsume logic under math. Which I suspect is then another problem for it.
For a concrete example, take Wildberger's notion of axioms as definitions. In science we have both definitions and assumptions, and my feeling, wrong or not, is that axioms combine these functions. In any case it is probably much easier to use math methods that map to science and vice versa than not, so I would stick with the contextual logic at hand. Another, more remote, correspondence I think exists between the 'logic' of more idealized math and science: the ambition to maximize the information out of a given context.
yagwara, thanks for explaining why Wildberger's book foreword says:
Btw, did you notice that Wildberger founded his own publishing company, Wild Egg Books, in 2005 to put out his book? He claims the ambition is to "establish in the years to come a spare but illustrious line of mathematical texts that break out of the usual mold." Not surprisingly his book is still alone on the company web site.
I'm not saying that the book is necessarily bad, but that we can suspect related problems with publishing, and in any case chalk up another score for "crackpottery" behavior.
"My fellow SBer Chad Orzel has said that part of teaching introductory subjects is lying to your students. You start by telling them about approximations to the truth, which you know are wrong in the details. "
I remember, when I started college, noticing a difference between the way this is handled in high school and in college.
Namely, in high school, teachers lie and they don't tell you they're lying. They don't even mention that, say, Newtonian mechanics isn't exactly correct. In college, though, if the theory that's being taught is just an approximation to reality then they say so, and mumble something about the truth being "beyond the scope of this course".
Now I'm in grad school. Nothing's ever beyond the scope of the course. But that's fine, because it won't be long before I just stop taking courses altogether.
Let's say that the universe is an input/output system, we, humans, observe it with our instruments and sensors which throw out numbers to us measuring some physical magnitude: are we ever going to see a real number as an output? Are we ever going to see pi as an output? Moreover, does the set of all possible outputs of this i/o universe contain the real numbers? Wasn't there a fundamental limit to the precision to which a physical magnitude can be measured?
About the "lying" and teaching part. I prefer the sugar-coated term: making (many) assumptions. Methinks it'd be better if teachers told the students that there are certain assumptions made about a problem (and possibly told them the more important ones) to avoid confusion in the future. Then again, the students might not care and the teachers might not know the assumptions made.
"Put briefly, constructivists believe that all valid math is based on constructing things. ... Some of that opposition continues to this day, and it's not just the domain of nuts."
You neglect to mention that in some areas it is genuinely useful--type theory and its related subjects being one very good example.
Ok, I haven't had time to read all of this (I've got some work to do, so it will have to wait until later), but I just wanted to suggest one thing about the educational aspect. I agree with what I think he is saying in that Math and all subjects are generally taught chronologically. That is, the way we first understood physics (Newtonian) is taught before thermodynamics which is taught before relativity, which is taught before quantum physics. I'm not saying that set theory (or quantum physics) is wrong, but that it might not be such a bad idea to reword it in such a way that it can be taught first. If it would be impossible to do that, then maybe it shouldn't be a foundation of math simply because it can't be taught first. Even though it might appear more 'primitive' (that is, useful for building/unifying math), if it can't be taught without a higher understanding of math, then it isn't really the foundation, it's just a bridge from one part to another. I feel the same about most other subjects too. Anyway, that might not make sense, but I'm just saying I agree that there are flaws in the way we teach a lot of subjects, and I think it has more to do with traditional curriculums than bad teachers.
One time while reading through a pile of slush manuscripts, I came across one by a fellow who claimed to have proved that pi is really equal to 22/7, a repeating decimal. There were a few pages of mathematical gobbledygook, followed by hundreds of pages consisting of the repeating decimal printed over and over again in tiny, unreadably small blocks.
I know there's a lot of cranks out there, but I couldn't help but wonder why someone would invest that kind of effort in proving a different value for pi. But I suppose it must be related to the ideas that Wildberger is objecting to here, just in a less sensible way.
I like Wildberger's ideas and ranting.
I may not agree with everything he says, but I have thought about some of the arguments. To me, this is what makes math enjoyable. The concept of the real numbers, in which the vast majority of those numbers can never be expressed, is fascinating. You can express the square root of 2 and even describe how to calculate pi, but most real numbers cannot be represented (i.e. constructed). I believe it has been proven that if you pick a number at random from the real numbers, the chance that you will pick a constructible number is 0%.

This is tantamount to proving that the vast majority of truths cannot be proven with first-order logic. What good are these truths if you can't prove them? Especially if you can't determine whether a truth is one of the non-provable truths. I can't help but feel that I am missing something fundamental in mathematics. The B-T paradox bugs me too.
I'm also interested in Wildberger's interpretation of calculus. That is that an infinitesimal is nothing more than a decay rate (the opposite of a rate of growth). No more delta/epsilon.
Consider Rotman, Brian, "Ad Infinitum ... the Ghost in Turing's Machine: Taking God Out of Mathematics and Putting the Body Back In" (Stanford: Stanford University Press, 1993) and his appendix on non-infinitistic arithmetic.
All this ranting about axioms etc. reminds me of the ancient Chinese nominalists who went to great lengths to prove that a white horse is not a horse, and also that there is no such thing as a hard white stone. As you describe Wildberger, his arguments appear to be worth thinking about for the same reason as the nominalists, if at all.
Wow. That article was pretty bizarre. I find myself in agreement with most of your comments.
The main problem I have with constructivism is that it seems to be born of a need to have mathematics "under control", to be able to understand exactly what is true and what is false -- and we know, thanks to Gödel, that this desire cannot be fulfilled. So the whole affair seems a bit pointless.
One point, though:
Well, I don't know about that. It actually commits the constructivists' cardinal sin: it claims the existence of a set satisfying a property without specifying the exact elements of the set. Once we have such sets, we can take the intersection of all such sets, and call the result N, or "the set of natural numbers"; but we still don't know exactly which sets are elements. That's actually a good thing; if the structure of N were fixed by the first-order set theory axioms, we could find a contradiction in ZF just by applying Gödel and a bit of model theory.
I'm not really sure where I fall on the CH question, so I'll leave that alone.
To comment #4:
No, it doesn't. It's perfectly plausible to have a universe which is itself infinite, but whose objects are all finite. In fact, within ZF, it's easy to construct a model of (ZF minus the Axiom of Infinity) in which every set is finite; but the model itself has to have an infinite number of elements. To me it seems to go back to the "potential infinity" vs. "actual infinity" debate; that is, the natural numbers are definitely a "potential infinity" in that there's no way to put a finite bound on them, but whether they form an "actual infinity" is slightly less supported.
To comment #22:
It depends on your notion of "constructible". If you mean something like "the decimal expansion is computable by a Turing machine", then what you say is true; there are only countably many machines, and countable sets have measure zero, so the probability of picking one at random is 0.
There's another sense due to Gödel, who introduced the constructible hierarchy of sets to classical set theory (and proved Choice and GCH relatively consistent to ZF in the process); it's entirely consistent that every real number (every set, even) is constructible in the sense of falling into this hierarchy, which would make the probability 100%. But I suspect you meant the first.
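(For anyone who hasn't seen the measure-zero argument spelled out, it's a one-liner: cover the nth number in your countable list with an interval of length ε/2ⁿ, so that

```latex
\mu\bigl(\{x_1, x_2, x_3, \ldots\}\bigr) \;\le\; \sum_{n=1}^{\infty} \frac{\varepsilon}{2^{n}} \;=\; \varepsilon \qquad \text{for every } \varepsilon > 0,
```

and hence the measure is zero.)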
Well, there's one very big thing that you're missing, and it's a very common error. The big unjustified leap that you're making is the assumption that "valid proof" must translate into "proof from some fixed set of first-order sentences in some fixed language". If anything, Gödel showed the opposite.
For example, let's say PA is the set containing Peano's postulates about the natural numbers (specifically the first-order axioms). I can prove that PA is consistent as follows: every element of PA is a sentence which is true on the structure of natural numbers; therefore every logical consequence of those sentences is also true on that structure; contradictions aren't true on any structure; so PA does not imply any contradictions.
This is a perfectly valid argument. If you grant me the first premise (there is a structure of natural numbers and the sentences in PA hold on that structure), then you have to accept the conclusion (PA is consistent). The vast majority of mathematicians, I think including most logicians, would accept this first premise.
Moreover, I can translate sentences and proofs and so forth into natural numbers, which makes "PA is consistent" naturally equivalent to a first-order statement S in the language of arithmetic, and we can consider S proved.
What Gödel showed was that S is not a consequence of PA alone, so in particular PA does not capture all mathematical truth, and that in general no set of first-order axioms can capture all the truths of mathematics. Sometimes you can prove a statement, but can't translate it into a proof from a given formalization. It's perhaps unfortunate if you were hoping to put all of mathematics on a single, rational footing; but it also means that there's an inexhaustible supply of new axioms -- just keep adding the statement that all the previous statements are consistent!
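In symbols (my shorthand, with Con(T) abbreviating "T is consistent"), the resulting tower of ever-stronger theories looks like this:

```latex
\mathrm{PA} \;\subsetneq\; \mathrm{PA} + \mathrm{Con}(\mathrm{PA}) \;\subsetneq\; \mathrm{PA} + \mathrm{Con}(\mathrm{PA}) + \mathrm{Con}\bigl(\mathrm{PA} + \mathrm{Con}(\mathrm{PA})\bigr) \;\subsetneq\; \cdots
```

Each theory proves the consistency of the one before it, and by Gödel's second incompleteness theorem each extension is strictly stronger (assuming consistency all the way up).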
Banach-Tarski strikes me as kind of weird as well, but I can reconcile it by claiming that "subsets of R^3" can be a more general class than "actual chunks of space", and that in general "chunks of space" have to be measurable sets. Stipulate that and all of the difficulties (that I can see) disappear.
The consensus would be that the universe is continuous, i.e. maps to real numbers.
First, it is continuous on large scales, so without compelling reason to think otherwise Occam tells us so.
Second, Lorentz invariance is hard to avoid, and it tells us that relative positions (as in relativity theory) transform continuously under movements. For example, string theory is proposed as a fundamental theory and it lives on a continuous manifold. (Not in spacetime, which is a derived property, but on the string world sheet.)
You are probably thinking of the uncertainty principle. It constrains our ability to measure the underlying physics for some combinations of observables and, coupled to gravity, ultimately of spacetime at small dimensions and high energies.
But since quantum mechanics combines descriptions of bound states with free states, we can still measure particular observables on other scales to in principle desirable precision. In practice, the finite lifetime of the universe puts a cap on that, though. Energies won't be perfectly resolved.
A more immediate reason why we can't measure reals is that our measurements are necessarily based on comparison with standards. This gives us rational quotients, and it would be an unbounded limit process to get "infinite" precision.
The real reason for limited precision is that we have noise and measurement errors. On the other hand, that plays right back, as we will be forced to describe those in principle "rational" measurements as probability distributions over real numbers. That is finally all what measurements are.
"But since quantum mechanics combines descriptions of bound states with free states, we can still measure particular observables on other scales to in principle desirable precision."
Sorry, something went missing. I meant to say that besides the problems of discreteness which comes from bound states, we still have continuity in free states.
And the gist of the comment is vague too. In essence, nature is probably isomorphic with some use of reals (and/or complex numbers) in fundamental theories. You can see people claim that we can only measure rationals. But in reality our measurements describe distributions over reals.
First, thanks for writing about this. I came across Wildberger's article a few weeks ago, and found that in reading it I was put in the very uncomfortable position of being interested in and willing to give consideration to his arguments and simultaneously completely turned off by his tone.
I think that the most important thing to remember about mathematics, and the point at which, as you pointed out, his argument ultimately collapses, is that mathematical ideas are just that: ideas. They are a series of metaphors, dreamed up by human beings to explain the world around, and subject to appropriate constraints of human cognition. They are not, unfortunately, a way to magically gain access to the ultimate, underlying structure of existence. They are spectacularly successful at explaining and interpreting the world around us, but only because we are constrained/inspired by our biology and society to come up with apt metaphors.
There is obviously a fascinating interaction between our rational faculties and the way we naturally conceptualize things which gives rise to modern mathematics and which makes it sometimes confusing and difficult to understand. One of the reasons mathematics today is so strange is because we are, for all intents and purposes, free to make up whatever we want, but as a society and a species we tend to make up certain kinds of ideas.
In a similar vein to Wildberger's theory is the general conflict between Platonism and Intuitionism. The article at
http://plato.stanford.edu/entries/philosophy-mathematics/
provides a lot to think about without condemning other mathematicians and without a whiff of crackpottery. I particularly enjoyed the concept that, despite Gödel's Incompleteness Theorem, Hilbert's conjecture that every problem of arithmetic can be decided from the axioms of Peano Arithmetic might still be true. However, Predicativism, which is based in the work of Russell, might be even more interesting.
The real reason for limited precision is that we have noise and measurement errors. On the other hand, that plays right back, as we will be forced to describe those in principle "rational" measurements as probability distributions over real numbers.
On that note: I've somehow wound up reading a bunch of stuff by Roger Penrose lately, and something he seems to bring up a lot is the observation that quantum physics as we know it is very inherently bound up with probabilities that must be described by real and/or complex numbers, but "probability", as we have any experiential physical notion of it, is something that only makes sense in terms of rational numbers-- i.e., you have a test that turns out a certain way a times out of b trials, and we call this rational probability a/b.
Penrose usually brings this up so he can try to sell the idea (which he doesn't really seem to completely buy into himself) that quantum theory would make the most sense if we could somehow figure out a way to build it up "combinatorially", such that quantum processes would actually consist of the interactions of a very large number of discrete entities, such that the outcome of those interactions can be described by rational probabilities. The real/complex-number math we use for quantum physics now would just turn out to be an approximation of this combinatorial system.
It almost seems more interesting though to take this the other way-- to look at the disjoint between rational probabilities we experience and real probabilities we use in math and physics, and conclude not that this disjoint means something is "wrong" with the real probabilities in physics-- but just to take it as a reminder that intuition isn't a very good guide and something is wrong, or at least naive, with the intuition that tells us probability is something that comes out of a combinatorial process. Measure theory tells us deeply unintuitive things, but it, not any known combinatorial procedure, is in some physical cases the one that produces the correct answer.
...Of course Torbjörn already basically said all of this in a more succinct manner:
You can see people claim that we can only measure rationals. But in reality our measurements describe distributions over reals.
Isn't an argument that probabilities come down to rationals (a/b for integer a,b) an inherently frequentist definition of probability? Quite often we have to deal with probabilities in contexts where we don't have repeated trials- no real-life situation is ever perfectly reproducible- so you have to go Bayesian, and then the argument about rationals goes away.
If you wish to read a constructive, non-axiomatic Euclidean geometry then try Konstruktive Geometrie by Rüdiger Inhetveen or Elementargeometrie by Paul Lorenzen. Unfortunately both books are only available in German. For a sensible approach to constructivism in science and mathematics the works of Lorenzen and other members of the Erlanger Schule are to be recommended, with the same proviso that they are mostly only available in German. I can personally vouch for Lorenzen's Elementargeometrie as I took part in the seminar that went through it before it was published.
Re: #2
According to people like Wildberger, perfect circles don't exist in the real world. As circles get better and better, that is closer and closer to perfect, the ratio of circumference to diameter approaches π, but the actual ratio is always a finitely computable rational number.
That's also the argument used by the Wildberger sort of constructivist for things like square roots: 2 doesn't have a square root in the truest sense - there are approximations of a square root of two that can get arbitrarily close, but the actual perfect root isn't real.
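To make the flavor of that concrete, here's a toy sketch of my own (exact rational arithmetic via Python's fractions module): every partial sum of Leibniz's series for π is a plain rational number, and the sequence closes in on π without ever producing it. (Leibniz's series is just the simplest example to write down; it converges painfully slowly.)

```python
from fractions import Fraction

def leibniz_partial_sums(terms):
    """Partial sums of 4*(1 - 1/3 + 1/5 - 1/7 + ...).
    Each one is an exact rational; they straddle pi, converging
    slowly toward it without ever reaching it."""
    total = Fraction(0)
    for k in range(terms):
        total += Fraction(4 * (-1) ** k, 2 * k + 1)
        yield total

for s in leibniz_partial_sums(5):
    print(s, float(s))
# 4         4.0
# 8/3       2.666...
# 52/15     3.466...
# 304/105   2.895...
# 1052/315  3.339...
```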
Re: #33
I can't read German, so I can't specifically address the approach of the text that you name. But there is no way to do geometry without axioms. You've got to start with something which lets you build up your theorems. All of the supposedly non-axiomatic works that I've seen take one of two approaches:
(1) They pull the Wildberger game of differentiating between "definitions" and "axioms", where the definitions include some form of the axioms. (For example, you can embed the parallel axiom into the definition of parallel lines - but the axiom is still there - you've just renamed it from "The parallel axiom" to "The parallel definition".)

(2) They use an intuitive approach to presenting the axioms. Instead of just stating the axioms as axioms, they use examples and intuitions to present them as the result of intuitive reasoning. I actually like this approach quite a lot: it's the right way to do it. You shouldn't just dump the axioms as a bunch of bland, contextless fundamental statements - you should provide the intuitive explanation of where they came from and why they make sense. But when you take this approach, you should still be honest about the fact that these are axioms.
I did rather get the sense, reading through his page, that you could just s/set theory/imaginary numbers/g, or s/set theory/irrational numbers/g, and nothing else would really change. All the complaints about axioms are just window dressing- he's just hit his limit as to what sort of mathematical object he's willing to get his mind around.
I think that I could write a book about Euclidean geometry that had no explicit axioms, only definitions and theorems. It would begin "The Euclidean plane is the set R^2, equipped with the metric d((x_1,y_1), (x_2,y_2))=((x_1-x_2)^2+(y_1-y_2)^2)^{1/2}." But I can do this because I believe in the real numbers. (In particular, I believe that the sum of two squares always has a square root.) I would be outsourcing all of my axioms to real analysis.
In general, most areas of mathematics use only definitions and outsource their axioms elsewhere. To use Wildberger's example, most books on group theory will begin "a group is a SET equipped with operations called multiplication and inversion and an element called the identity such that ..." Logically, no one should be able to understand that sentence without knowing the axioms of set theory but, in fact, it causes no trouble at all.
According to people like Wildberger, perfect circles don't exist in the real world. As circles get better and better, that is closer and closer to perfect, the ratio of circumference to diameter approaches π, but the actual ratio is always a finitely computable rational number.
So what is the approximation supposed to approximate? Some near-pi number that is equally impossible to produce?
And if there is a largest integer, doesn't all arithmetic immediately become paraconsistent? That doesn't seem so useful. (there exists M>0 s.t. M+1=M)
I really don't understand constructivist critiques. So we can't produce an infinite set; but we can produce a finite formal object that acts as though it were infinite under certain interesting operations.
No one can produce the integer 1, either, only define certain physical forms to be identified with the integer 1.
(oh, Mark, Chaitin's Omega is usually capital Greek omega.)
There's a great deal that I could say about this thread, which is about a subject I've studied and pondered deeply for about 50 years, including when I earned my B.S. in Math at Caltech (1968-1973, also a B.S. in English Lit). I started there as a Physics major, but ended by specializing in advanced Mathematical Logic and in Number Theory. I then went on in grad school to Category Theory (an alternative to sets as a foundation), and then to practical physics and computing in the space program for 20 years, and to work as an adjunct professor in Math and in Astronomy, always leery about how Math and the physical universe connect.
To say just one thing now, let me respond to #27 "... The consensus would be that the universe is continuous, ie maps to real numbers."
Consensus of whom? Below the Planck scale, i.e. approximately 1.6 × 10^-35 metres, 6.3 × 10^-34 inches, or about 10^-20 times the diameter of a proton, that assumption looks pretty silly. My mentor and coauthor Richard Feynman emphasized in Quantum Electrodynamics that an electron had to be a zero-diameter point, with no spatial extent, but QED, for all its glory, begins to break down at small enough lengths.
The topology of space-time at the smallest lengths is wildly in non-consensus right now. Google "Planck length" and "wormholes" to get started.
Peter Lynds, the controversial young autodidact in New Zealand, has written about why there are no "points" in time, no instants, no "chronons", but only time as a continuum. He's also written recently in arXiv about boundary conditions under which the cosmos has no beginning or end, quite differently from Penrose and Hawking. My coauthor Professor Fellman (Southern New Hampshire University) and I have published several papers comparing Lynds' theories with those of "the consensus" theorists, and have another one on Quantum Cosmology to present in a couple of weeks at the 7th International Conference on Complex Systems, Boston, 28 Oct-2 Nov 2007.
The issue of foundations is very deep. The axiom of infinity is not suspect in and of itself, but somewhere around Woodin Cardinals and strongly inaccessible cardinals I do find the physicist part of my brain protesting.
Much more could be said about different universes with different logic, different axioms, different physics, from Tegmark's recent work, to Topos Theory for Physics, to Vasiliev's "Imaginary Logic."
Far too much that I want to say, so let me leave it at that for this one posting.
Attempts to boil down mathematics can come to something called the successor function, which is at its heart recursive. An article in The New Yorker magazine recently seems to say that recursion and self-reference may not be as basic as we think they are. The article describes a group of Amazonian Indians who have no concept of number or self-reference; here is a link: The Interpreter. What was most interesting to me was that these folks are perfectly conscious and intelligent and can make their way in the world just fine, but would most definitely be at a total loss when it comes to mathematics. I wonder if first-order predicate calculus could be disposed of in a similar manner.
Isabel (#16):
Yep. I remember the refrain "beyond the scope of this course" very well. Sometimes the reason was simple time constraint: the concepts involved weren't intrinsically harder, but they'd take another lecture to explain, and the semester was only so many lectures long. In other cases, the material "beyond the scope of this course" had heftier prerequisites.
Random example:
In Quantum II, we were told that the reason elementary particles are indistinguishable — why, say, all electrons have the same mass and the same charge and can't be told apart — arises naturally from quantum field theory. We weren't told how this fact comes about, just that it does, and for the moment we should just take it as given.
Funnily enough, I'd actually seen the quantum-field-theoretical explanation the semester before, from Barton Zwiebach. So I'm never quite sure the people who decide what the prerequisites are really know what they're talking about.
I wonder if it's possible to define a Riemann integral without having to resort to infinite sets.
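(For what it's worth, the finite approximations themselves need nothing infinite - a minimal Python sketch, names mine; the integral proper is then a claim about how these sums behave as n grows:)

    def riemann_sum(f, a, b, n):
        # left-endpoint Riemann sum over n subintervals: a finite object
        width = (b - a) / n
        return sum(f(a + i * width) for i in range(n)) * width

    # the integral of x^2 over [0, 1] is 1/3; the finite sums close in on it
    for n in (10, 100, 1000):
        print(n, riemann_sum(lambda x: x * x, 0.0, 1.0, n))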
As a related problem, there is a bit of a controversy in statistics about the applicability of asymptotic properties of an estimate or hypothesis test. Who cares about the consistency of an estimator (i.e., that it converges in probability to the right thing) if you don't have an infinite sample size with which to estimate?
I happen to have Euclid's Elements with me here in the office (it was a gift from an old colleague) and this is one of my favourite topics.
Definitions and axioms are themselves well-defined notions (up to a point). To use Euclid as an example, points and lines are definitions, and the first axiom links the two up: that we can draw a straight line from any point to any other point. Axioms allow us to "do stuff" so to speak. Without axioms we have a dictionary and no action.
So Wildberger is right in a sense when he says group theory has no axioms: if you assume the stuff before it, all it does is add more definitions and voila, we have interesting results. Whereas in set theory you have to introduce axioms to have results, just as you have to introduce axioms in geometry before you have results.
Which brings me to the most interesting question in my mind: is mathematics formal, empirical, or neither? The conclusion I drew from my final-year philosophy of math project seems to suggest that it is neither, and I thought the consensus is that it is quasi-empirical: that without real-world relevance math is junk, and we cannot build it from a pure formal/logic/constructivist approach. If you go and look at how the ancient Chinese did mathematics, they did not put in a system of definitions and axioms. They just started computing, and whatever worked, worked. No proofs necessary, as long as one could estimate/predict what the result is (motion of stars, movement of water, etc). Now if you morph that argument into Wildberger's argument it would make for an interesting paper: to hell with the formal approach (which is confusing and doesn't work) and let's just concentrate on finding solutions for real-life problems.
Also, if math were easy there would be *more* papers produced rather than fewer. Hey, if it were easy everybody would be able to get involved...
Anyways my two cents.
Mark W in Vancouver
MarkCC wrote:
The Erlanger Schule uses variations on the second approach that you outline but does not use axioms. I will now attempt to explain this.
It's more than twenty years since I have looked at either of the geometry books I mentioned, and geometry is not my thing, so I can't explain how they build up their constructive geometry. However, constructive mathematics without axioms is possible, and I will give a sketch of the Erlanger Schule construction of the integers to illustrate what I mean.
Before I do that, however, a few words on 'definitions', 'axioms', 'postulates', etc. All languages, formal and informal, have primitive undefined terms, which, in the end, all other definitions come back to. The aim of the constructivist is to make those primitive terms as simple and as intuitive as possible, and then to build everything else in his language step by step from these primitive terms by definition and construction rules, thereby ensuring that all complex statements in his language are valid because they are correctly constructed. Axioms or postulates are complex statements within a formal language that are accepted or postulated as being true when creating the language and are never subjected to a validity test; an unacceptable state of affairs for a constructivist.
Classical arithmetic constructs the integers using the Peano Axioms, about which Mark has posted on a number of occasions. The first axiom postulates the existence of either one or zero; axioms two to four define the successor function; and axiom five postulates complete mathematical induction. The constructivist has no problem with the successor function, as we shall see, but rejects totally the postulation of one or zero and of mathematical induction. The constructivist starts with 'marks' and lists or collections of marks such as III, II, IIIII, IIII and so forth. He then compares 'lengths' of lists by one-to-one correspondence and defines 'equally long', 'longer' and 'shorter'. The successor function is defined in the usual way: 'I' is a mark; if 'n' is a list of marks then 'nI' is also a list of marks; and if nI = mI then n = m. The integers are then defined as the equivalence classes of equally long lists; that is, 'four' is the class of all lists that have the same length as IIII. Addition is then defined as the juxtaposition of two or more lists. Subtraction is similarly defined as that which is left over when a shorter list is put into one-to-one correspondence with a longer list. The negative numbers are defined as the result of subtraction of a longer list from a shorter one. Zero is the result of subtracting a list from an equally long one. The statement of mathematical induction is not postulated but proved using constructive logic, a fairly simple proof. Further construction of the rational, irrational and complex numbers is done using number pairs and the Dedekind cut.
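(A toy rendering of that sketch in Python - the naming is mine, and strings of 'I' characters stand in for the lists of marks; the empty list standing for zero is a shortcut of mine, not part of the construction. Note that nothing below presupposes a completed set of numbers:)

    def successor(n):
        return n + "I"   # from the list n construct the list nI

    def add(m, n):
        return m + n     # addition as juxtaposition of lists

    def equally_long(m, n):
        # one-to-one correspondence: pair off marks until one side runs out
        while m and n:
            m, n = m[1:], n[1:]
        return m == "" and n == ""

    four = successor(successor(successor(successor(""))))
    print(four)                         # IIII
    print(add("II", "III"))             # IIIII
    print(equally_long(four, "IIII"))   # True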
On the subject of infinity, constructivists accept what Aristotle called the potential infinite but not an actual or real infinite. That is, the integers have no upper bound - for any given integer it is possible to construct the next largest integer (from X follows XI) - but it is not possible to present a closed set of all the integers. Interestingly, Euclid in his beautiful proof does not, as many people believe, prove that there are infinitely many prime numbers, but that there is no largest prime; a constructivist proof!
The above is only a sketch and of course leaves many questions open (e.g. what are equivalence classes?), but I hope that I have made clear what a constructivist means when he says that he uses no axioms.
That was an excellent post Thony C.
I have two questions to which you may or may not know the answer. First, do constructivists recognize real numbers that are NOT also integers, irrationals, algebraic, etc.? In other words, real numbers that cannot be constructed. Second, how would they interpret the infinitesimal in calculus?
David
Constructivists construct rational numbers from integer pairs (m,n), where the rational number represented by (m,n) is the equivalence class of all integer pairs (k,l) such that m*l = k*n, where '*' is normal multiplication, which has of course been previously defined for the integers. Multiplication is of course just repeated addition. Irrational numbers are defined using Dedekind cuts, as in classical mathematics. A Dedekind cut is a construction rule and perfectly acceptable to constructivists. Complex numbers are defined as pairs of real numbers in a way that is similar to the construction of rational numbers from the integers.
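(Again a minimal sketch, with my own function names, of the pair construction just described - equality of the classes is decided entirely by integer multiplication:)

    def same_rational(p, q):
        # (m, n) and (k, l) represent the same rational iff m*l == k*n
        (m, n), (k, l) = p, q
        return m * l == k * n

    def add_rational(p, q):
        # the usual sum of fractions, stated on representative pairs
        (m, n), (k, l) = p, q
        return (m * l + k * n, n * l)

    print(same_rational((1, 2), (3, 6)))                        # True: same class
    print(same_rational(add_rational((1, 2), (1, 3)), (5, 6)))  # True: 1/2 + 1/3 = 5/6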
Constructivists cope with the infinitesimals in calculus in exactly the same way as classical mathematicians: they don't use them. Calculus is done using the delta-epsilon limit process as defined by Weierstrass, which again is constructive.
Thony C.:
Why are lists viewed to be a better starting point than just saying you have a number called zero?
Thony C.,
Let me start off by first saying that I am not being argumentative in any way. I am honestly trying to get my mind wrapped around a mathematical concept. So your help with this really is appreciated and I am in no way being sarcastic.
So with that said, it is my understanding that not all real numbers can be represented by a finite number of mathematical operations. This is why there cannot be an equation for finding the roots of 5th-degree and higher polynomials. Assuming that I am correct on this point, I have another assumption.
My second assumption is that Dedekind cuts are still based on normal mathematical symbols and operations. So the square root of 2 would be represented by something like this:

A = {a in Q : a^2 < 2, or a <= 0},
B = {b in Q : b^2 >= 2, and b > 0}.

The point is that the Dedekind cut in this case uses a and b raised to the power of 2 to define the cut. If numbers exist that cannot be described with a finite number of mathematical operations, then this means that there are real numbers that cannot be represented by Dedekind cuts.
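(For concreteness, this particular cut amounts to a finite decision procedure on each individual rational - a sketch in Python, names mine:)

    from fractions import Fraction

    def in_lower_set(a):
        # membership in A is settled by finitely many rational operations
        return a <= 0 or a * a < 2

    print(in_lower_set(Fraction(7, 5)))  # True:  (7/5)^2 = 49/25 < 2
    print(in_lower_set(Fraction(3, 2)))  # False: (3/2)^2 = 9/4 >= 2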
My last assumption was that without ALL the real numbers you could not use the delta-epsilon limit process as a foundation for calculus. Thus you would have to use infinitesimals or hyperreals or some other unique concept.
Most likely one of my assumptions is off or I am jumping to a wrong conclusion. So where did I go wrong?
Regarding the idea of infinite sets being defined as "not finite" (according to Wildberger), it seems clear that he's deliberately evading the possibility that "finite" has in fact already been defined in a reasonable way.
I don't have much of a background in formal set theory except for a basic understanding of the axioms, so I'm not sure how "finite" and "infinite" are commonly defined, but it seems to me that a very natural way of defining "infinite" is to say that an infinite set is a non-empty set which can be placed in one-to-one correspondence with some proper subset. Then a "finite" set is simply a set that is not infinite. Is this how it's usually done? If not, it's certainly equivalent to the usual definition, and it involves no circularity at all.
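(The witnessing map for N under that definition is the familiar n -> 2n: one-to-one, onto the evens, which are a proper subset - a trivial sketch of a finite prefix, since the point is the rule, not a completed set:)

    def f(n):
        return 2 * n  # injective, and its image (the evens) omits 1

    print([f(n) for n in range(8)])  # [0, 2, 4, 6, 8, 10, 12, 14]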
The standard definition is that a set is finite if it can be placed in one-to-one correspondence with a proper initial segment of N. A set is infinite if it's not finite.
It actually requires the Axiom of Choice (or some weaker variant thereof) to prove that this is equivalent to the definition you present.
Coin:
Thanks, seems we have much the same view.
Stephen Wells:
Actually, the definition is the limit, so it is the real in the measure.
Btw, wouldn't the same description (of measuring rationals) be applicable to Bayesian probabilities too? Or do you mean that you would circumvent it by adopting priors among the reals?
Jonathan Vos Post:
I think I described who and why as clearly as I can make it. But I will try again.
Theoretical physicists assume Lorentz invariance must still apply.
If Lorentz invariance holds, the volume of a highly boosted region of spacetime must go to zero as we increase the boost. This is impossible if a discretization simultaneously implies that such a region must have discrete eigenvalues of scale in a discretization of your choice.
Witness string theory, which is manifestly continuous on the worldsheet. That measurements, and the concept of spacetime that depends on them, break down below Planck scales doesn't prohibit continuity of the physical entities that the theory describes.
Too much is being made of the constructivist/classicist dichotomy; both systems are consistent and useful.
I think Wildberger is really complaining about the way math is taught and in this I am inclined to agree with him.
We teach constructively for the most part and then abandon these principles at a later stage for no good reason that I can see other than historical habit.
Wildberger's follow-up paper should also be read:
Numbers, Infinities and Infinitesimals
http://web.maths.unsw.edu.au/~norman/papers/Ordinals.pdf
"This paper outlines a more concrete and less philosophical approach to infinities and infinitesimals. It promotes the idea of thinking of an infinity not as an infinite set, but more simply as the growth rate of a function defined from natural to natural numbers. An infinitesimal then becomes the reciprocal of an infinity, and it is shown how these concepts allows us to recapture the more useful aspects of ordinal arithmetic. They also let us apply nonstandard analysis to everday calculus in an algebraically simple manner. This paper is a follow up to A) Set Theory: Should you believe?"
archgoon wrote:
What we are trying to do is define the natural numbers. The Peano axioms start by postulating either one or zero, both of which are natural numbers; defining something by postulating the thing that you are trying to define is, to put it mildly, somewhat circular!
My sketch of a constructive definition started not with any natural number but with concrete objects, 'marks', in collections or lists. I could just as easily have started with baskets of apples, jars of glass marbles, groups of trees (rather awkward, as one would have to teach outdoors, but one could combine the arithmetic and natural history lessons), pencil marks on a beer mat (that is how German waiters record the number of beers that one has drunk), chalk marks on a wall or arrays of pebbles. The last is how the Pythagoreans did arithmetic; it's known as psephoi arithmetic (psephoi is the Greek for pebbles). All of these things are concrete objects; none of them is a number, natural or otherwise. To simplify the given situation, the objects are placed in one-to-one correspondence with marks on a piece of paper, and then one proceeds as in my earlier post. The natural numbers first come into existence through abstracting from the lists by applying an equivalence relation such as 'equally long'; there are other possibilities. There is no circularity.
David wrote:
This assumption is simply wrong and all of the real numbers can be represented/constructed using Dedekind cuts. ;)
Maya Incaand Re:#52
I totally agree with you. Although I am myself not a constructivist, I seem to have become their spokesman on this thread. I did so because, having studied both logic and the foundations of mathematics with some of Germany's best constructivists, I quite freely and happily admit that doing so opened my eyes to aspects of both disciplines of which I would never have been aware had I only studied the classical versions of them.
Torbjörn:
Lorentz invariance is very easy to avoid, actually. The quantum field theories that have full nonperturbative definitions are generally defined on a discrete lattice, which is assuredly not Lorentz invariant. There are good reasons based on symmetry why the effective theory at scales large compared with the lattice spacing looks Lorentz invariant, but this is merely further evidence that we do not need to have Lorentz invariance at the shortest scales.
Thony C.,
Just to make sure I have this straight, are you saying that ALL real numbers can be represented by a finite number of mathematical operations?
David:
Yes! Actually in several different ways.
David:
Yes, all real numbers can be represented by a finite number of mathematical operations.
I think what you're getting confused by is something from the definition of transcendental numbers. Transcendental numbers cannot be represented by any finite sequence of algebraic operations. So, for example, there's no finite algebraic equation for e.
But if you don't restrict to algebraic operations, there's no problem.
-Mark
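(To illustrate Mark's point: each partial sum of 1/k! is a rational reached in finitely many operations, and the single non-algebraic rule "take the limit of these" pins down e - a sketch, names mine:)

    from fractions import Fraction
    from math import factorial

    def e_partial_sum(n):
        # sum of 1/k! for k = 0..n, in exact rational arithmetic throughout
        return sum(Fraction(1, factorial(k)) for k in range(n + 1))

    for n in (5, 10, 15):
        s = e_partial_sum(n)
        print(n, s, float(s))  # converges to e = 2.71828...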
Thony C.,
Thanks! I am now going to spend several hours poring over this link and its ramifications for my understanding of math and numbers. I have always been under the impression that not all real numbers could be represented this way.
Susan B.
I would say that a set $A$ is infinite if there is a one-to-one function $f:A \to A$ which is not onto. But this is obviously equivalent to your definition, so yes, that is the proper definition of infinite. Your way has the advantage that it doesn't require that you define the natural numbers first, and it is much more natural in any case.
Mark,
You were 100% correct. That is exactly what I was doing.
I don't understand how all real numbers can be represented by a finite number of mathematical operations. How could you ever get more than a countable number of numbers?
On a broader question, what are the advantages of the Axiom of Choice and other non-constructive methods? Since we get such apparent absurdities as Banach-Tarski, what are the offsetting good things that make mathematicians go down this road?
Ken: I'm not a mathematician, though I play one on the internet. But from what I understand, the AoC is equivalent to or necessary for a large number of seemingly obvious or necessary truths.
For example, AoC is equivalent to the statement "Every element in the set is either less than, greater than, or equal to any other element in the set."
Ken wrote:
Ken, you are confusing two different things, and in so doing you have put your finger on the wound that constructivists believe exists in classical maths, and that is the starting point for this thread.
When one says that all of the real numbers can be constructed in a finite number of steps, it means that I can construct any individual real number that I choose; none of them is inaccessible to me or to my method. However - and it's here that the classicists and the constructivists go their separate ways - I cannot present you with all of the real numbers at one and the same time, because there are infinitely many of them. For the same reason I can't even present you with all of the natural numbers. As a result the constructivists say that to talk of the set of the real numbers, or any other infinite set, is complete rubbish, because there is no such thing. The classicists, however, say that they have no problem talking about or even operating with infinite sets as if one could actually display them, and as is well known Cantor laid the foundation of a complete arithmetic that deals with both infinite ordinals and cardinals. However, there are snags, and now we come to the second part of your question. In order to avoid those snags we have to introduce things like the axiom of choice, a tool that was historically regarded as very dubious by many mathematicians and still is by the constructivists. On the other hand, to paraphrase Hilbert, the AoC and several other things detested by constructivists allow the classicists to remain in the paradise that Cantor opened for us.
Xanthir wrote:
The problem is that it's also equivalent to or implies a number of seemingly impossible statements.
I'm guessing you're actually referring to Trichotomy here, which doesn't apply directly to set elements, but rather to cardinality of sets, saying either |A| = |B|, |A| < |B|, or |A| > |B|.
There is a joke observation attributed to Jerry Bona that "The Axiom of Choice is obviously true, the Well-Ordering principle is obviously false, and who can tell about Zorn's Lemma?"
Most of these uses of the AoC are to lump two cases together: finite and infinite, and only the infinite needs AoC. While I accept that my intuition is not going to be reliable for infinite sets, the infinite cases vary for me from "not obviously true" to "obviously not true".
Sorry, Trichotomy was indeed what I meant.
I don't tend to have problems with the infinite AoC questions, though. They're very interesting, but often don't have a mapping to the real world, so they don't bother me. The B-T paradox is one of those for me - there's no way to cut an actual sphere into pieces with non-measurable boundaries, so you can never turn an actual sphere into two spheres with the same size and density.
Torbjörn wrote:
Well, to be clear, I'm not entirely sure I actually agree with the sentiments expressed in my own post above. This is all very confusing... :)
Stephen Wells wrote:
Brett:
I'm not sure what you are saying here. The problem appears on smaller scales at boosts, if we try to attach eigenvalues (say, for volumes or areas) to our discretization as a fundamental theory. If we are using a lattice for an approximative effective theory we don't do that, and have no problem AFAIU.
For example, AFAIK loop quantum gravity, causal dynamical triangulations and causal sets all fundamentally break Lorentz invariance because of this obstruction, while QCD on the lattice is a working effective approximation.
> Math has limits: it can't be both consistent and complete.
Goedel proved this for sufficiently rich math. Roughly, the one that allows infinite sets.
Wildberger's opposition to defining infinite as "not finite" has some appeal, but does he consider Dedekind's definition "coextensive with a proper subset" or Russell's "not both well-orderable and reverse well-orderable"? (I couldn't get the link to work to check for myself.)
Also, on Cantor: since Cantor didn't have a comprehension axiom (which Frege did), he didn't have to claim that concepts/conditions like ::not a member of itself:: or even ::self-identical:: actually have extensions. So Cantor didn't face the Russell paradox immediately, nor did he necessarily face paradoxes of the universal set. Of course, this is partly because Cantor just doesn't say what conditions do or don't determine a set, which one might like him to do.
In any case, the history of all this is laid out pretty nicely in John Burgess's "Fixing Frege", which I just so happen to be reading right now. Indeed, I'm basically paraphrasing what Burgess says at the start of section 1.4, "Russell's Solution" (hopefully accurately).
Burgess goes on to describe Russell's ramified type theory as stemming from the recognition that the paradoxes aren't just limited to set theory or to assuming actual infinities, contra Wildberger. Russell actually saw the paradoxes that emerged for Frege's system, as well as his own, as related to the Grelling paradoxes, which are perfectly finitistic, and the motivation for the theory of types - the distinction between predicative and impredicative definition - aimed at a common solution to the Grelling paradoxes as well as paradoxes like the Russell set and those relating to the universal set. Wildberger's citation of the Russell set as showing something uniquely problematic for set theory is at least out of line with Russell's own understanding of it as of a kind with paradoxes of impredicativity in general, including some that are finite and have nothing to do with sets.
Re: #71, "'Math has limits: it can't be both consistent and complete.'
Goedel proved this for sufficiently rich math. Roughly, the one that allows infinite sets."
Sorry, slawekk. That was a little too rough. Consider Presburger arithmetic, which is the first-order theory of the natural numbers with addition, named in honor of Mojżesz Presburger, who published it in 1929. It is not as powerful as Peano arithmetic because it omits multiplication.
Presburger arithmetic deals with an infinite structure, yet Presburger in 1929 proved his arithmetic to be:
* Consistent. All provable statements in Presburger arithmetic are true.
* Complete. All true statements in Presburger arithmetic are provable from the axioms.
* Decidable. There exists an algorithm which decides whether any given statement in Presburger arithmetic is true or false.
See, to begin with:
http://en.wikipedia.org/wiki/Presburger_arithmetic
and the references cited therein.
I think that you made an assumption about Godel's results which isn't true.
We also know something about how long the proofs are in Presburger arithmetic.
Also Re: 71.
Goedel shows that efforts to capture math with first-order formal theories have limits. Of course, math also has limits: it won't decide all kinds of empirical questions, for example.
Re: #73 Jonathan, thank you for the correction. What I meant to say was that this part of Mark's critique does not make much sense, as Wildberger rejects all formal systems to which Gödel's incompleteness theorem applies.
Not strictly true. Gödel's proof is constructive and applies equally to constructive systems.
Sorry Coin, but you are wrong. Every item in the set of real numbers can be constructed. However, as you correctly point out, the real numbers, being non-denumerable, cannot be listed. These are two not directly related facts that you are conflating. Dedekind cuts or Cauchy sequences are not definitions of real numbers or instructions for listing real numbers; they are instructions for the construction of real numbers, and as such they can construct any individual real number without restriction. Listing all of the real numbers is a whole different thing altogether.
It would be nice if Mark or Thony could provide a link (even a book reference) to a precise discussion of how all real numbers can be represented by a finite number of mathematical operations. David and Coin and myself don't get it.
The link Thony gave about the construction of real numbers doesn't seem to say anything about a finite number of operations.
Ken:
As far as I can see the problem appears to be a linguistic one, related to the different possible meanings of the word 'all'. 'All' has two related but fundamentally different meanings: applied to some group, aggregate, collection, class, set, etc., it can mean the whole of that set taken as an entirety, or it can mean each individual member of the set taken separately. With reference to the construction of the real numbers, it is possible using Dedekind cuts or Cauchy sequences to construct each and every real number, i.e. 'all' in the second meaning given. There are no real numbers that are different in some way and thus not accessible to these methods of construction. It is of course a trivial and obvious truth that one cannot list, display, present or view all of the real numbers at one go, in the first sense of 'all' as given above. One cannot do this for any infinite set, whether denumerable or non-denumerable, which is the main reason why constructivists reject talk of infinite sets. This is also the reason why the axiom of choice is necessary: although it seems intuitively obvious that one can choose one item out of each of a given number of sets, when that given number is infinite, i.e. not displayable even in theory, one cannot be sure that the process of selection is actually possible, and in order to guarantee that this selection process is possible it is postulated as an axiom.
Coming back to your original question: it is possible to construct each and every real number as an individual using a finite number of mathematical steps, but it is not possible to present all of the real numbers in their entirety in a finite number of steps. Classical mathematics tries to circumvent this problem by the use of the axiomatic method; however, even that is not free of problems, but that, as they say, is another story.
If you don't mind Thony, let me give you a specific example and then you should be able to see where Coin and I are having trouble.
How would you represent pi as a Dedekind cut?
However, before you answer, assume that you don't know an algorithm for generating pi and that pi doesn't represent the ratio of a circle's circumference to its diameter - assume for a second that it is one of the zillions of ordinary real numbers that exist. I think once Coin and I understand how this example is represented as a Dedekind cut, then everything will fall into place for us.
I said Coin in the last message, but I meant Ken.
In that case nothing will fall into place for you, because the simple answer to your question is that you can't. In order to construct a specific number you have to know what number you wish to construct. All that Dedekind cuts or Cauchy sequences make possible is a complete construction of a given real number; they do not allow you to construct a number of which you are not aware. The question is, why would you want to? If you don't know a number and you have no use for it, why would you want to construct it? Pi is interesting exactly because it is the ratio of the circumference of a circle to its diameter, or alternatively the result of various infinite sums. We are interested in Pi because we know it exists and we have a use for it. What is of course important is the proof that any given real number can be constructed using a Dedekind cut; this is what completeness means.
Constructivists argue against the assumption of real infinities for exactly this reason: we don't need or use them. All the numbers that we use in reality are finite, so why bother with real infinities? There are, according to Cantor, non-denumerably infinitely many real numbers. Anybody solving a real mathematical problem only needs finitely many of them, so we only use those that we need. Dedekind cuts are purely a formal apparatus that allows us to be sure that the real numbers we use are consistent and that we can apply the arithmetic operators to them without fear that something strange will happen.
David:
The problem is, you're really asking for a constructivist explanation of how to create arbitrary unspecified real numbers.
This is, ultimately, an axiom of choice issue. The AoC can be used to prove that there is a finite arithmetic construction of any real number. It doesn't say that for a particular number, we know how to find it.
This is exactly the kind of result that makes constructivists hate the AoC: because it produces results that say that things exist, that very specific things exist, while putting them beyond reach. It says that these things are constructable, only we can't, in general, find the constructions.
When it comes to things like this, I have a lot of sympathy with the constructivists. But as a computer scientist, I deal with the inevitable results of AoC issues all the time - so many of the questions of computability ultimately wind up resting on the AoC, or one of its equivalents.
MarkCC:
Reading through our two posts I think we both say the same thing in different words; however, there is one point in your post that I think is misleading. Dedekind cuts can and do deliver a proof that there is a finite arithmetical construction for any real number, and they do so without the AoC, which is why it is the system preferred by the constructivists.
So, if there are no infinite sets, does that mean that there is a greatest integer?
Yes. It hasn't been found yet, but it's known to be somewhere between 2^64 and A(163, 163).
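(The A in the joke is presumably the Ackermann function; a standard two-argument definition in Python for readers who haven't met it - just don't actually call it with (163, 163):)

    def ackermann(m, n):
        # grows faster than any primitive recursive function
        if m == 0:
            return n + 1
        if n == 0:
            return ackermann(m - 1, 1)
        return ackermann(m - 1, ackermann(m, n - 1))

    print(ackermann(2, 3))  # 9
    print(ackermann(3, 3))  # 61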
There exist multiple problems with Mr. Wildberger's reasoning and technique as given. Although, there exist some other problems here too:
[Set theory works in conjunction with first order predicate logic.]
Sure.
[You can build all of math using those two - but take away sets, and the logic doesn't have a model; take away the logic, and you can't reason about sets.]
No, simply NOT the case. We can't build second-order logic or a multiple-valued logic or fuzzy set theory or fuzzy calculus or fuzzy arithmetic from those two. Second, one CAN take away two-valued logic as absolute and still reason about sets... we just reason about a different class of sets involved.
Also, he does speak correctly about set theory not working as a foundation for all of mathematics. As you said "Euclid defined the axioms of geometry. He added the parallel axiom, even though he didn't think it should be necessary. It turns out that without it, you can get different results. You can't get to standard Euclidean geometry without defining the answer to "Given a point p and a line l, How many lines parallel to l pass through p?" as being 1. There's no reason that it has to be. You get different geometries if you pick different values. But that's fine! The different geometries are interesting, and have interesting applications."
If one doesn't assume the "parallel axiom" that members of sets either belong or don't belong to sets, then we get a different set theory. Other types of mathematics then exist, just as there exist different geometries than Euclid's. One can rather easily construct even other arithmetics using something like the extension principle of fuzzy set theory. If we use the max-product version instead of the max-min version or the drastic-drastic version, we can get different results for simple arithmetical equations like 2+3=5 or 2x3=6 when the degree of membership does not equal 0 or 1. But this actually involves constructing examples. Wildberger doesn't do this. He basically just complains (as presented) and then does NOT go on to show how classical set theory doesn't cover all of math.
Gee... look at that. The "constructivist" MERELY objects to a theory without constructing counterexamples of how it doesn't work. Seriously, it isn't all that hard to construct examples to show that classical set theory doesn't cover all of math. Can't a constructivist like Wildberger apply constructivism instead of just whining?
[We can't build second-order logic or a multiple-valued logic or fuzzy set theory or fuzzy calculus or fuzzy arithmetic.]
[Also, he does speak correctly about set theory not working as a foundation for all of mathematics.]
Doug, we've had this discussion before, and as I and several others told you, this is wrong.
Thony and MarkCC,
I appreciate your patience. I understand how I can use a Dedekind cut to represent any specific real number that I desire, with the caveat that I actually have an instance of that specific real number. If I have an instance of a specific real number, then that fact implies that I somehow generated it. I also understand that this does not imply that the number was necessarily generated by algebraic means. For example, the real number could be generated by a trigonometric function (or any type of function for that matter). So far so good.
Part two is where I start to have a conceptual problem, but I think I am getting it. Part two involves real numbers in which I do not have a specific instance. If I understand Mark correctly, the AoC can be used to prove that there is a finite arithmetic construction of any real number, so even if I don't have a specific instance of a real number, but only have an abstract real number in the generic sense, then I can rest assured that there is a finite arithmetic construction for that generic number. If I understand Thony correctly he is saying that a Dedekind cut can be used to represent any specific real number and that since a Dedekind cut can represent ANY real number this also implies that ALL real numbers can be represented by Dedekind cuts.
One last point -- just because I can represent any and all real numbers with a Dedekind cut, or with a finite arithmetic construction based on the AoC, it does not mean that the reals are countable. The set of reals is still uncountable. Cantor's diagonal proof (and other proofs) that real numbers are non-denumerable still holds. Why is this? In the case of integers I can count them because I can put them in some kind of order (same with rational numbers, algebraic numbers, etc.). However, even though I can describe a specific real as a finite arithmetic construction, I still can't describe the next or previous real number in the sequence.
Am I in the ballpark here?
BTW, I was really enjoying Mark's posting on Set Theory. Do you plan to revisit that some time? If I remember correctly, you left your textbooks on a train or something.
I guess this really isn't that big a deal, but...
I fully understand how the Dedekind cuts provide a complete construction for the set of real numbers.
What I don't see is any reason to say that the Dedekind cuts allow us to produce a construction for each object which is an element of the set of real numbers. For individual reals, Dedekind cuts seem to have the same issues as any other method of constructing reals. A Dedekind-cut real is defined by the pair of sets of reals which are less and greater than the cut. It would seem then that in order to say one has "constructed" an individual real via Dedekind cut, you have to produce the definitions of the less-than and greater-than sets.
If our Dedekind cut falls on, say, pi, then it is easy to produce these sets, because we know something about the properties of pi, and there are various methods by which one could determine whether a real is less than or greater than pi, even without resorting to calculating a decimal expansion or whatever of pi itself. There will, however, be reals which exist but for which there is no way to finitely describe those less-than and greater-than sets! If your less-than and greater-than sets are unconstructable, the real that lies between them will be unconstructable by the Dedekind cut method.
There shouldn't be anything surprising about this -- the idea that there are sets which we can construct, but which have contained elements or subsets which we don't have a way to construct, is, it seems to me, the reason why we need the Axiom of Choice in the first place. Conversely, it doesn't tell us much to take refuge in something like "The question is why would you want to? If you don't know a number and you have no use for it why would you want to construct it?". I certainly can't think of anything, but surely there must be something interesting about the unconstructable subsets of aleph-1, or we wouldn't bother to define an axiom of choice so that we could access those members in the first place.
Am I confused on some point here? I will say, I think one thing that would really help a lot here is if we had a specific definition of what it means "to construct" something.
If we're constructing the reals from the rationals then we're going to define the reals as the set of Dedekind cuts of the rationals, so it's trivial that there's a Dedekind cut for every real since every real just is a Dedekind cut.
Coin says:
Not quite. Here's the definition of Dedekind cut from Mathworld:
http://mathworld.wolfram.com/DedekindCut.html
The pairs of sets in question are sets of rationals, not sets of reals. It wouldn't do to define the reals from the reals. But you're right about the definitions required. You have to have an ordering of the rationals before you can do this construction.
Anyhow, I take your question to be a good one. Let me rephrase it (and correct me if I screw it up). As noted above there's a Dedekind cut for each real, since each real just is a Dedekind cut. But that doesn't imply that each real is constructible unless each Dedekind cut is constructible. So, is each Dedekind cut constructible? I think the answer is "no". All constructible numbers are algebraic. The set of algebraic numbers is countable. The set of Dedekind cuts is uncountable.
The underlying issue here is possibly an equivocation on "construction", so I think that your concluding comment was spot on. When we speak of constructing one set from another we use "constructible" in the sense of "definable as sets of"; see the definition above of a Dedekind cut, for instance. When we speak of a number being constructible we mean "defined by a finite number of algebraic operations on integers". These seem to be two different senses, hence your conclusion that a constructible set can have non-constructible members.
More:
http://mathworld.wolfram.com/ConstructibleNumber.html
[Doug, we've had this discussion before, and as I and several others told you, this is wrong.]
You never demonstrated such, while I still came up with counter-examples to what you said. I also pointed out counter-examples above, AND you have still not pointed out a flaw in my reasoning. Here's another case in point.
Let's look at the simple equation 2+2=4. According to crisp set theory, our answer always yields 4 as the result. Let's say that the number 2 has the following degrees of membership, or degrees of closeness to 2: {(0, .2), (1, .5), (2, 1), (3, .5)}. Now, suppose we use the max-min extension principle. Then 2+2 yields {(0, .2), (1, .2), (2, .5), (3, .5), (4, 1), (5, .5), (6, .5)}. We already don't have 4 always working as our answer. Also, you simply can NOT get the 5 with a degree of membership of .5 from the crisp addition of 2 and 2, whether we added a crisp point number like 2 or a crisp interval number like [1, 3]. The (5, .5) appears only from the extension principle at work and how max and min work.
Now suppose we use the max-product extension principle (I'll explain how to compute with these principles if need be... but since you claim superior knowledge over me in terms of how fuzzy set theory ultimately works, I won't patronize you). Then we have:
{(0, .04), (1, .1), (2, .25), (3, .5), (4, 1), (5, .5), (6, .25)}
We have a different "4" than above. Both yield only 4 in the crisp case, but in the strictly fuzzy cases, where degrees of membership don't equal one, we have different results, as I feel you can plainly see. But then the addition of fuzzy numbers yields non-identical results depending on WHICH addition operator you use.
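(The arithmetic above does check out; here is a sketch of the extension principle as described - my own naming - where the t-norm T is min for max-min and multiplication for max-product:)

    from itertools import product

    def fuzzy_add(A, B, t_norm):
        # extension principle: (A+B)(z) = max over x+y=z of T(A(x), B(y))
        out = {}
        for (x, ax), (y, by) in product(A.items(), B.items()):
            z = x + y
            out[z] = max(out.get(z, 0.0), t_norm(ax, by))
        return out

    two = {0: 0.2, 1: 0.5, 2: 1.0, 3: 0.5}
    print(fuzzy_add(two, two, min))                 # max-min sum
    print(fuzzy_add(two, two, lambda a, b: a * b))  # max-product sum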
In crisp set theory there exists only one addition operator. Again, both operators work out as identical in the crisp case. Consequently, it appears that crisp set theory would predict that both results would come out the same. But they don't come out the same. And there actually exists an infinity of other logical operators usable within the operation of addition, which yield different results here, or for other simple arithmetical problems. I simply don't see crisp set theory implying that addition can yield different results when the degree of membership of the elements involved equals a number between 0 and 1. Nor can it really, as crisp set theory DOES NOT talk about elements with degrees of membership between 0 and 1. It ONLY talks about elements that either belong or don't belong to a reference set (degree of membership of 0 or 1) and their corresponding sets, families, classes, etc.
Again, the first axiom of crisp set theory states "Two sets are the same if and only if they have the same elements." Yet with fuzzy set theory a set like {(4, .6)} consists of the element 4 with its respective degree of membership of .6, while {(4, .3)} has the exact same element 4 with its respective degree of membership of .3. Yet the axiom of extension predicts that these come out as the same set. If you want to use the formal version of the axiom of extension, then I'll introduce the notion of alpha cuts. I'll say that the alpha cut for belonging or membership in a set comes as .2 (I'll explain the alpha cut concept, but again I won't patronize you intentionally). Consequently, {(4, .3)} works out as a member of our reference set, as does {(4, .6)}, since both have degrees of membership greater than the alpha tolerance level. But it still remains that the fuzzy sets do NOT work out the same, since they have different degrees of membership. Sure, one can modify the axiom of extension to make it so that the 4s come out different. But then one introduces degrees of membership or something similar, and consequently one has a different theory than crisp set theory with just first-order logic. If you disagree with these points, then please explain how crisp set theory predicts different results, such as the different results for the addition of 2 and 2, how it talks about elements with degrees of membership between 0 and 1, and so on.
Again, you simply didn't do this. You appear to keep holding onto the philosophical illusion that crisp set theory comes as an absolute foundation for all of mathematics in the face of contrary evidence. What gives?
Uh...
Let's look at the simple equation 2+2=4. According to crisp set theory, our answer always yields 4 as the result. Let's say that the number 2 has the following degrees of membership or degrees of closeness to 2
...okay, back up. Why can we say this? Are you sure this even means anything? You talk about "crisp set theory" but then you immediately start using concepts from fuzzy set theory. Where does fuzzy set theory come from? Based on what do you define it? Does it exist? (For that matter, based on what do you define ".2", your claimed "degree of membership" between 0 and 2?)
You then go on to try to claim that because the ideas of fuzzy set theory do not apply to "crisp" set theory, you can reject the idea "crisp set theory comes as an absolute foundation for all of mathematics"-- your claim seems to be that since set theory produces different results from fuzzy set theory, set theory can't encompass whatever part of mathematics where fuzzy set theory resides. This is pretty silly since fuzzy set theory is constructed using set theory. Fuzzy sets are built up from objects that can be defined purely using set theory-- things like membership functions, and the real numbers which provide the weights for those membership functions, can be defined in terms of set theory.
Now, you could just ignore "crisp" set theory and define fuzzy set theory as the basic axioms we happen to be choosing to work with -- kind of like how we don't need set theory to say 2 + 2 = 4 either; we can just take Peano arithmetic as the axioms we start with. But it would be something of a mess, because the definition of fuzzy sets incorporates all these heavyweight objects like real numbers, and you'd have to define the axioms for all of those, too. If you're going to try to start with fuzzy sets, the meaning of that ".2" has to come from somewhere.
If we're talking about foundations it's much easier to just start with sets (or propositional logic for that matter) as being assumed to be the most basic underlying set of axioms, and then say that functions and real numbers are constructed from set theory and fuzzy sets are constructed from sets + functions + real numbers. Doing it this way is equivalent to what you seem to be trying to do, which is just starting out with axioms which consider fuzzy sets to be the most fundamental thing, and then considering "crisp" sets as just a special case of this one fundamental thing (where all membership weights are 100%) -- but taking normal sets as the foundation is way more parsimonious.
What you're saying is like saying that a house is not a molecule, yet a house is matter, therefore molecules cannot be the absolute foundation for all matter.
Jeremy:
The pairs of sets in question are sets of rationals, not sets of reals. It wouldn't do to define the reals from the reals.
Whoops! Didn't realize that, in retrospect I guess it should have been obvious. :)
Set, members... oh, duh! Well, thanks Jeremy and Coin (and Thony, Mark, and others) for uncovering an equivocation.
We have not only evolved to be pattern-detecting; our training recognizes that it is beneficial to find general and economical similarities. And when the differences do make themselves important, it can be darn difficult to backtrack to exactly where we made the equivocations.
[Let's look at the simple equation 2+2=4. According to crisp set theory, our answer always yields 4 as the result. Let's say that the number 2 has the following degrees of membership or degrees of closeness to 2
...okay, back up. Why can we say this?]
Well, the human nervous system allows us to do so. It comes as a characterization of the number 2, just like we characterize 2 by saying 2=2. Honestly, I think it a good question to ask why we can talk about degrees of membership or closeness. Still, it remains that we can do so, just as we can ask why the computer in front of us exists, and yet it remains that it does exist however we answer the question.
[Are you sure this even means anything?]
It works out mathematically according to definitions, so I don't see a need for it to "mean" anything.
[Where does fuzzy set theory come from?]
Fuzzy set theory has a fuzzy logic (an infinite-valued logic), or at least a three-valued logic, as its background, similar to how crisp set theory has two-valued logic as its background, if that's what you mean to ask.
[Does it exist?]
It exists as a mathematical construct. If you don't accept this, please explicate what you mean by "exist".
[For that matter, based on what do you define ".2", your claimed "degree of membership" between 0 and 2?]
.2 comes between 0 and 1. Second, I can base my "degree of membership" here on my subjective intuition. Or I could do a survey and get an average of other people's intuitions to make such information intersubjective. Regardless of how I came to (0, .2) for the number 2, it still works out that the degree of membership of .2 exists as understood and useable. Much like regardless of how a chair got built, I can still sit in it.
[This is pretty silly since fuzzy set theory is constructed using set theory.]
No. Again, the DEGREES OF MEMBERSHIP concept appears nowhere in crisp set theory. I've shown this by talking about the axiom of extension above. Second, fuzzy set theory has fuzzy logic as a background as opposed to a first-order logic as does crisp set theory. I don't see anything silly here, unless you can DEMONSTRATE that fuzzy set theory gets derived from crisp set theory without using at least a three-valued logic and only first-order two-valued logic.
[Fuzzy sets are built up from objects that can be defined purely using set theory-- things like membership functions, and the real numbers which provide the weights for those membership functions, can be defined in terms of set theory.]
One does NOT have to use real numbers to talk about fuzzy sets. One can use words. We could say that the number 4 has a high degree of membership in our reference set. Or, more concretely, the letter 'y' has a high degree of membership in the set of vowels (of course, feel free to disagree with me there). The term "high" has imprecise boundaries, and consequently indicates that the vowel 'y' belongs to a fuzzy set to some non-crisp degree.
Second, fuzzy set theory does use membership functions. In crisp set theory a membership function - or indicator function, to avoid confusion - indicates an answer to whether or not something belongs to a set. It gives us a "yes" or a "no". Basically, indicator functions in crisp set theory must do this according to their definition. A fuzzy set does NOT, I repeat DOES NOT, give us a "yes" or a "no" answer as to whether an object belongs to a set, just like saying that 'y' is sometimes a vowel does not answer the question whether or not 'y' qualifies as a vowel. The membership function tells us HOW MUCH, or the DEGREE OF BELONGING, of an element to a set. The "how much" and the "degree of belonging" parts, again, appear nowhere in the idea of an indicator function in crisp set theory.
Third, there exists a well-known parallelism between set theory and classical logic. Idempotency, absorption, the principle of the excluded middle, and the principle of contradiction hold for both classical two-valued logic and crisp set theory. One could say that they mimic each other in terms of basic structure. A fuzzy set theory using 'max' and 'min' for union and intersection doesn't mimic classical logic. Other fuzzy set theories using other logical operators also don't mimic classical logic in terms of basic structure in every respect, though they do in some respects. I'll make this a little more clear.
Crisp set theory implies (or assumes) that, for crisp sets, the property of idempotency ALWAYS holds. But it doesn't always hold for fuzzy sets. Case in point: consider the product operator for intersection, and an element with degree of membership of .5 intersected with itself, i.e. (a, .5)*(a, .5) = (a, .25). Then our degree of membership equals .25. If idempotency held, our degree of membership would equal .5, as crisp set theory predicts will hold for all sets, i.e. (a, .5)*(a, .5) = (a, .5). But it doesn't hold for our fuzzy sets which use the product operator for intersection.
[But it would be something of a mess, because the definition of fuzzy sets incorporates all these heavyweight objects like real numbers...]
No. The set {y} forms a fuzzy subset of the set of vowels. A group of natural-born hermaphrodites forms a fuzzy subset of the set of males.
[If you're going to try to start with fuzzy sets, the meaning of that ".2" has to come from somewhere.]
That ".2" doesn't have meaning within context until we have the DEGREE OF MEMBERSHIP concept. That degree of membership concept has to come from somewhere, and I simply don't see it coming from crisp set theory, do you? Where do you think the degree of membership concept comes from crisp set theory, really?
[If we're talking about foundations it's much easier to just start with sets (or propositional logic for that matter) as being assumed to be the most basic underlying set of axioms, and then say that functions and real numbers are constructed from set theory and fuzzy sets are constructed from sets + functions + real numbers.]
If by 'propositional logic' you mean first-order or any two-valued logic, then certainly not. Also, you'll have the same result with sets. If we do so, we have the principle of contradiction, idempotency, and so on, as basic axioms. But then we'll start jettisoning axioms as assumptions once we get to fuzzy set theory, since we'll end up having counter-examples against them.
[Doing it this way is equivalent to what you seem to be trying to do, which is just starting out with axioms which consider fuzzy sets to be the most fundamental thing, and then considering "crisp" sets as just a special case of this one fundamental thing (where all membership weights are 100%) -- but taking normal sets as the foundation is way more parsimonious.]
Nope, hardly at all. First off, I don't consider fuzzy sets the most fundamental structure. There also exist neutrosophic sets, which come out as more general than fuzzy sets. It does work out that every crisp set qualifies as a special case of fuzzy sets on [0, 1]. One can define a (numerically valued) fuzzy set as a set having ANY degree of membership in [0, 1] for its membership function with respect to a reference set. Since every crisp set has an indicator function into {0, 1}, and {0, 1} comes as a subset of [0, 1], we have that every crisp set also qualifies as a fuzzy set, while the converse does not hold.
Lastly, taking normal crisp sets does NOT work out as more parsimonious. The logic involved for crisp sets assumes De Morgan duality, commutativity, associativity, involution, absorption, idempotency, distributivity, the principle of identity, the principle of contradiction, and the principle of the excluded middle: 10 basic principles for each operation, or 19 basic principles (since involution doesn't have a dual). Take the added assumption that the intersection, union, and complementation of objects originally equalling 0 or 1 stay in {0, 1}, and we'll say 20 principles to make things nice. With at least a three-valued logical system at work, basically every principle can go out (depending on which three-valued logic one works with). Placing this in the more general setting of a system of all three-valued logics, all those principles go out as necessary logical assumptions (except for perhaps the principle of identity). I can restate my logical assumptions another way.
For the basic logic proposed, let i(a, b) indicate the operation of intersection on a and b with a in the first position, let u(a, b) represent the operation of union, and let c(a) represent the complement of a. Then I ONLY need the following assumptions (adapted from Buckley and Eslami's _An Introduction to Fuzzy Logic and Fuzzy Sets_):
1. 0<=a, b<=1 and i(a, b), u(a, b), c(a) all fall in [0, 1].
2. i(1, 1)=1.
3. i(0, 1)=i(1, 0)=i(0, 0)=0
4. u(0, 0)=0
5. u(0, 1)=u(1, 0)=u(1, 1)=1
6. c(0)=1, c(1)=0.
That's it! I ONLY need 6 principles (by this counting method) while the same counting method yields 19 principles for crisp set theory to just get off the ground. It looks like you need over three times the number of basic logical assumptions for crisp set theory to get off the ground. The conclusion seems plain enough: crisp set theory comes as significantly less parsimonious in its assumptions than fuzzy set theory does.
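One concrete trio satisfying those six assumptions is the standard (min, max, 1-a); a short Python check (the grid spot-check of axiom 1 is my own shortcut, not a proof):

```python
from itertools import product

i = min                # intersection
u = max                # union
c = lambda a: 1 - a    # complement

# Axioms 2-6 constrain only the values at 0 and 1; check them directly.
assert i(1, 1) == 1
assert i(0, 1) == i(1, 0) == i(0, 0) == 0
assert u(0, 0) == 0
assert u(0, 1) == u(1, 0) == u(1, 1) == 1
assert c(0) == 1 and c(1) == 0

# Axiom 1 (values stay in [0, 1]) spot-checked on a grid of sample memberships.
grid = [k / 10 for k in range(11)]
for a, b in product(grid, repeat=2):
    assert 0 <= i(a, b) <= 1 and 0 <= u(a, b) <= 1 and 0 <= c(a) <= 1
print("axioms 1-6 hold for (min, max, 1 - a)")
```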
[What you're saying is like saying that a house is not a molecule, yet a house is matter, therefore molecules cannot be the absolute foundation for all matter.]
Interesting analogy, but the problem here lies in that fuzzy set theory doesn't work out as "the house", since it actually rests on fewer assumptions and therefore has a "smaller size" than the house of crisp set theory... to speak very metaphorically.
Doug, you're putting forth different objects and then incorrectly claiming that ZFC says they must behave the same. Though they're both called "addition", those are two different operators, and thus there's no reason to think that ZFC says the same things about them.
[In crisp set theory there exists only one addition operator.]
Actually, in the language of ZFC there is no addition operator. The only symbol (beyond the basic logical symbols of ∀, ∃, etc.) is ∈. Integer addition is then a defined notion, and there's no reason we can't define other operators as well. I shall demonstrate.
First I need to construct fuzzy sets in ZFC. Let's assume we've already constructed the reals (as described elsewhere in this thread) and functions. So there's no confusion, let +0 and *0 be the standard addition and multiplication on the reals. I now define a fuzzy set to be a function f with range contained in [0,1].
Now, if f and g are both fuzzy sets with domains each a subset of the reals, I will define +1 as follows:
f +1 g is the function with domain {a +0 b : a ∈ domain(f) and b ∈ domain(g)} with (f +1 g)(c) = max{ min{f(a), g(b)} : a ∈ domain(f) and b ∈ domain(g) and a +0 b = c}.
Notice that +1 is precisely the max-min addition you described earlier.
Now, I define +2 as:
f +2 g is the function with domain {a +0 b : a ∈ domain(f) and b ∈ domain(g)} with (f +2 g)(c) = max{ f(a) *0 g(b) : a ∈ domain(f) and b ∈ domain(g) and a +0 b = c}.
Notice that +2 is precisely the max-product addition you described earlier.
Both of these concepts were constructed in ZFC. Even though they're both called "addition", they're not the same operation, so ZFC doesn't say they both behave the same way.
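To see these two definitions run, here's a rough Python rendering; dicts stand in for the set-theoretic functions, and fuzzy_add, add1, add2 are names I've made up for the sketch:

```python
def fuzzy_add(f, g, t_norm):
    """Extension-principle addition: (f + g)(c) = max over a + b = c of t_norm(f(a), g(b))."""
    out = {}
    for a, fa in f.items():
        for b, gb in g.items():
            s = a + b
            out[s] = max(out.get(s, 0), t_norm(fa, gb))
    return out

add1 = lambda f, g: fuzzy_add(f, g, min)                 # +1: max-min addition
add2 = lambda f, g: fuzzy_add(f, g, lambda x, y: x * y)  # +2: max-product addition

f = {2: 0.5, 3: 1.0}
g = {2: 0.5}
print(add1(f, g))  # {4: 0.5, 5: 0.5}
print(add2(f, g))  # {4: 0.25, 5: 0.5} -- genuinely different operations
```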
Now, given a fuzzy set f, I define ∈* as follows:
a ∈* f iff a ∈ domain(f) and f(a) > 0.
Then 4 ∈* {(4, .6)}, and 4 ∈* {(4, .3)}. However, Extensionality doesn't come into play. Extensionality is a statement about ∈, not ∈*. So ZFC doesn't claim that these two fuzzy sets are the same.
In fact, .6 ∈ {4, .6} ∈ (4, .6) ∈ {(4, .6)}, while .3 ∈ {4, .3} ∈ (4, .3) ∈ {(4, .3)}. So by repeated applications of Extensionality, ZFC claims that these two are different.
[You appear to keep holding onto the philosophical illusion that crisp set theory comes as an absolute foundation for all of mathematics in the face of contrary evidence. What gives?]
You're right that claiming that set theory is a foundation for all of mathematics may be a bit too much of a claim, since I'd be hard pressed to come up with a definition of "mathematics" that we could all agree upon. However, I can claim that it is a foundation for everything that has currently been developed and called math. I hope I've illustrated how this is true.
[unless you can DEMONSTRATE that fuzzy set theory gets derived from crisp set theory without using at least a three-valued logic and only first-order two-valued logic.]
Tada!
"...okay, back up. Why can we say this?]"
Well, the human nervous system allows us to do so.
I for one put very little faith in that particular organ. I have known it to do some very strange things on occasion.
David, if you want to know more on Dedekind cuts, the best source is the man himself, in his Essays on the Theory of Numbers.
[Doug, you're putting forth different objects and then incorrectly claiming that ZFC says they must behave the same. Though they're both called "addition", those are two different operators, and thus there's no reason to think that ZFC says the same things about them.]
First consider the max-min operator '+' for (2, 1)+(2, 1). We get 4, as we take the minimum of 1 and 1, then the maximum of all possible degrees of membership given, which gives us 1. So, we have (4, 1), which means we have 4. Second, consider the max-product operator for (2, 1)+(2, 1). Again, we get 4. Consequently, ZFC says the exact same thing about 2+2 in this case. The reason comes as that any union-intersection way of operating for a fuzzy operation like fuzzy addition will say the same thing, when we restrict membership values to {0, 1}.
[I now define a fuzzy set to be a function f with range contained in [0,1].]
O.K., then there exists a set which has an indicator function which maps to a value like .5. Consequently, the notion of an indicator function mapping to {0, 1} doesn't always hold for the sets given. What this means comes as that the function maps the element such that it no longer has to either belong or not belong to the set. Again, if the function given works like a fuzzy set, then it has a DEGREE of membership involved. Members of crisp sets either belong or don't belong to their reference sets.
[f +1 g is the function with domain {a +0 b : a ∈ domain(f) and b ∈ domain(g)} with (f +1 g)(c) = max{ min{f(a), g(b)} : a ∈ domain(f) and b ∈ domain(g) and a +0 b = c}.]
I don't understand the '?' notation.
[Both of these concepts were constructed in ZFC. Even though they're both called "addition", they're not the same operation, so ZFC doesn't say they both behave the same way.]
When we talk about 2+2, ZFC says that they do always behave the same way, as both yield 4. For 2+2 in fuzzy set theory they don't always do so.
[However, I can claim that it is a foundation for everything that has currently been developed and called math. I hope I've illustrated how this is true.]
No, because you haven't illustrated how mathematics based on multiple-valued logic, such as fuzzy arithmetic, fuzzy differential calculus, systems of fuzzy equations, etc., does so.
[[unless you can DEMONSTRATE that fuzzy set theory gets derived from crisp set theory without using at least a three-valued logic and only first-order two-valued logic.]
Tada!]
Alright, on the flip side... let's suppose I accepted that you had done so. Then let's see where this goes. As is well known, for sets the principle of contradiction ALWAYS holds: the intersection of a set and its complement always yields the empty set, or A ∩ c(A) = {}. Given the axiom about complements above, I can define the complement of a fuzzy set, c(a), as 1-a, where a indicates the membership value involved. So, let's take c({(4, .6)}). We get {(4, .4)} as the complement of {(4, .6)}. Using minimum as the intersection operator, we get {(4, .4)} as the intersection of these two sets. This set, as plain as the noonday Sun, is NOT the empty set, nor correspondent with the empty set as (4, 0) would be. In other words, A ∩ c(A) = {} comes up as false here. Consequently, IF I ACCEPTED YOUR DERIVATION OF FUZZY SET THEORY, THEN CRISP SET THEORY BECOMES INCONSISTENT.
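Numerically, that step amounts to this two-line check (operators as chosen in the paragraph above):

```python
c = lambda a: 1 - a  # standard fuzzy complement
i = min              # minimum as the intersection operator

m = 0.6  # membership of 4 in {(4, .6)}
print(i(m, c(m)))              # 0.4 -- not 0, so A "intersect" c(A) fails to come out empty
print(i(0, c(0)), i(1, c(1)))  # 0 0 -- on crisp values the law of contradiction survives
```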
One could very well say that either crisp set theory doesn't lead to all of mathematics, or it works out as inconsistent in that it allows one to derive contradictions. I think Gödel said as much in the preface to his famous _On Formally Undecidable Propositions..._, but I don't have an exact quote here, although remember that Gödel's propositions apply to crisp set theory just as well as they do to _Principia Mathematica_.
[I for one put very little faith in that particular organ.]
Well, I didn't say that my characterization worked as true and real, nor that the notion of reality applies to it. Still, it remains mathematically permissible.
As another example, crisp set theory says that the union of a set with itself ALWAYS yields the same set, or A ∪ A = A. Let's use the "probability" operator u(a, b) = a+b-ab for the union operator. Consequently, the union of the set {(2, .5)} with itself yields {(2, .75)}. Again, we don't have the same set here. So, if crisp set theory really does allow one to derive fuzzy set theory, then one can derive a counter-example to the crisp set-theoretic theorem A ∪ A = A. In other words, one can disprove a crisp set-theoretic theorem. Consequently, crisp set theory qualifies as inconsistent.
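The arithmetic checks out in one line (operator as given above):

```python
u_prob = lambda a, b: a + b - a * b  # the "probability" union operator

m = 0.5
print(u_prob(m, m))                # 0.75 -- so {(2, .5)} "union" itself gives {(2, .75)}
print(u_prob(0, 0), u_prob(1, 1))  # 0 1 -- idempotent on the crisp values {0, 1}
```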
The claim seems to be that, if you define "union" to mean something other than the union of two sets, then identities involving the union of sets don't hold true of the "union" of two sets.
In other news, if you define a crocodile as a herbivore, then there exists a carnivorous herbivore. So?
[The claim seems to be that, if you define "union" to mean something other than the union of two sets, then identities involving the union of sets don't hold true of the "union" of two sets.]
Not exactly, as a+b-ab always works just like the union of two crisp sets when we talk about sets with characteristic functions that map to {0, 1}. For instance, for the reference set {3, 4, 5, 6}, the union of {4, 5} with {3, 4} using that operator works like this, if we do things in full detail. We first write our sets with all characteristic functions explicit:
{(3, 0), (4, 1), (5, 1), (6, 0)}
{(3, 1), (4, 1), (5, 0), (6, 0)}
So, for 3 we have 0+1-(0)(1)=1 as the resulting indicator value, for the 4 member we have 1, and for the 5 member we have 1. So, we have {(3, 1), (4, 1), (5, 1), (6, 0)} as the resultant set. According to convention we don't write elements with indicator values of 0, or indicator values in general, so we write {3, 4, 5} for the resultant set. So, let me make this clear... I didn't define "union" as something other than the union of two sets. I used an operator which ALWAYS numerically models the union operator accurately.
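The same worked example, spelled out in Python (the dict encoding of indicator functions is my own convention):

```python
u_prob = lambda a, b: a + b - a * b

reference = [3, 4, 5, 6]
A = {3: 0, 4: 1, 5: 1, 6: 0}  # the crisp set {4, 5} written as an indicator function
B = {3: 1, 4: 1, 5: 0, 6: 0}  # the crisp set {3, 4}

union = {x: u_prob(A[x], B[x]) for x in reference}
print(union)                               # {3: 1, 4: 1, 5: 1, 6: 0}
print({x for x in reference if union[x]})  # {3, 4, 5} -- the ordinary crisp union
```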
If crisp set theory does serve as a basis for fuzzy sets, then the operators of crisp set theory DO apply to fuzzy set theory also. In other words, crisp sets CAN model fuzzy sets if they serve as a basis for fuzzy sets. But, again, idempotency, the excluded middle, the principle of contradiction, etc. ALWAYS hold for crisp sets. Crisp sets can NOT and do NOT model cases such as the union of .4 and .4 for the operator min(1, a+b)... or if they do, then crisp set theory has internal inconsistencies. Fuzzy set theory indicates crisp set theory as either incomplete or inconsistent through purely numerical means, really. Fuzzy sets CAN and DO model crisp sets when you restrict the values of the membership function of fuzzy set theory to the indicator-function values {0, 1} for EVERY fuzzy set operator. What gives with you guys rejecting this, when I keep throwing more examples at you? Does all this come as THAT shocking? If you claim I really made a mistake, where does it lie?
Nothing that you have said in any way justifies this claim, and it is you who cannot see the truth, not the others.
[Consequently, the notion of an indicator function mapping to {0, 1} doesn't always hold for the sets given.]
Good thing I didn't say that. I said [0,1], not {0,1}. [0,1] is the unit interval: all real x such that 0 ≤ x ≤ 1.
[I don't understand the '?' notation.]
I didn't use any question marks. Is the html not going through correctly? In the bit you quoted, the only html I used was subscripts and "isin".
[First consider the max-min operator '+' for (2, 1)+(2, 1). We get 4, as we take the minimum of 1 and 1, then the maximum of all possible degrees of membership given, which gives us 1. So, we have (4, 1) which means we have 4. Second, consider the max-product operator for (2, 1)+(2, 1). Again, we get 4.]
Okay, let's go through this. Following the definition of +1, ZFC says {(2,1)} +1 {(2,1)} = {(4,1)}
Following the definition of +2, ZFC says {(2,1)} +2 {(2,1)} = {(4,1)}. All okay so far. So yes, ZFC (correctly) says that +1 and +2 agree on these values. This is not a problem.
[Consequently, ZFC says the exact same thing about 2+2 in this case.]
This set is not 2. 2 in ZFC is 2. This is {(2,1)}. Surely you see that they are different. The second might be the fuzzy set 2 as implemented in ZFC, but that's not the same thing as 2. Just like the real number 2 (as built by Dedekind cuts) is not the same as the rational number 2 (an ordered pair (2,1)) is not the same as the natural number 2 (which is simply 2).
This seems to be the root of your problem. You're identifying objects in ZFC with their corresponding object in the implementation of fuzzy sets, and then attempting to apply statements about one to the other to derive a contradiction. This is wrong. They are not the same thing.
[When we talk about 2+2, ZFC says that they do always behave the same way, as both yield 4. For 2+2 in fuzzy set theory they don't always do so.]
Again, you're claiming that ZFC believes that two different operations, which happen to have "addition" in their title, are the same. There is no basis for this other than the shared name.
[Given the axiom about complements above, I can define the complement of a fuzzy set, c(a), as 1-a, where a indicates the membership value involved.]
Again, you make the same mistake. Fuzzy complement is not the same as strict complement, the fact that they have a word in common notwithstanding. There is no justification for claiming that statements about one should also hold true of the other.
[Let's use the "probability" operator u(a, b) = a+b-ab for the union operator.]
My same bait-and-switch objection.
Stephen says it so much more succinctly than I.
[Nothing that you have said in anyway justifies this claim and it is you who cannot see the truth and not the others.]
A claim unsubstantiated qualifies as mere opinion. Please try to support your claim, as Mark did in the original post here and as Antendren has done.
[[Consequently, the notion of an indicator function mapping to {0, 1} doesn't always hold for the sets given.]
Good thing I didn't say that. I said [0,1], not {0,1}. ]
So, do you claim the indicator function of crisp set theory can map to [0, 1]? I don't see how you can claim that, as the definition of an indicator function says it maps to {0, 1}. I admit you did call such just a "function" and not an indicator function. But you must realize that in fuzzy set theory that function tells us something about the degree of belonging of an element to a reference set. Does your function indicate something about the "degree of belonging" concept, and where can you get that concept... I repeat, the concept of "degree of belonging"... from the assumptions of crisp set theory?
[Is the html not going through correctly?]
Yea, it's the html. Or was... it looks fine now. O.K., going back... the +1 and +2 additions come out as exactly the +0 addition when we always restrict our function to {0, 1}. It doesn't work the other way around when we work with [0, 1]. So, fuzzy set theory ends up more general for the additions given.
I suppose you can define max-product addition and the like using ZFC as above, but taking the case above you have 3 different types of addition. We only need 2 types of addition to cover the same information using just fuzzy set theory, since +0 works as a special case of +1 and +2 when we restrict our function to {0, 1}. In this respect, fuzzy set theory comes as simpler, although admittedly one always uses +0 in all types of fuzzy addition... so in practice they work out as equivalently complicated. And of course, if we deal with JUST crisp information, fuzzy set theory comes as more complicated in many respects.
[Consequently, ZFC says the exact same thing about 2+2 in this case. This set is not 2. 2 in ZFC is 2. This is {(2,1)}. Surely you see that they are different.]
Where did I claim the set was 2? {2} comes as a shorthand for {(2, 1)} in crisp set theory. This won't work as the case in fuzzy set theory. There, {2} comes as a shorthand for {(2, x)}, where x indicates a number in (0, 1] greater than our tolerance level. In other words, in fuzzy set theory, if one used a similar notation to crisp set theory, {2} may indicate {(2, .8)} or {(2, .9)} if, say, our tolerance level came as .7. In crisp set theory, of course, {2} always indicates {(2, 1)} and nothing else.
[Just like the real number 2 (as built by Dedekind cuts) is not the same as the rational number 2 (an ordered pair (2,1)) is not the same as the natural number 2 (which is simply 2).]
If you work strictly from definitions, sure, but if you work from consequences I don't see how they differ. Interestingly... I don't think you need Dedekind cuts for real numbers, or even set theory... you really just need geometry and magnitudes. Direction can give us negative numbers, calculable proportion gives us rational numbers, incommensurable magnitudes give us irrational numbers, and magnitudes using shapes like circles give us the rest of the reals. Of course, that doesn't give us complex numbers, and we may find it difficult to get them this way.
[You're identifying objects in ZFC with their corresponding object in the implementation of fuzzy sets...]
If we can't identify the objects in ZFC with their corresponding objects in the implementation of fuzzy sets, how does ZFC form a foundation or basis for fuzzy sets? I see your statement as saying that the objects in ZFC DIFFER from the corresponding objects in fuzzy set theory. So, where do the objects in fuzzy set theory come from? Doesn't that imply that fuzzy set theory deals with something different than ZFC... that it has a different content? It still remains that the operations in fuzzy set theory DO model the crisp set theory operations when they get confined to {0, 1} and that the converse doesn't hold. Consequently, I can recover, at least, the results of crisp set theory operations using JUST fuzzy set operations on {0, 1}. Can one recover fuzzy sets using crisp set operations on [0, 1]? Again, the examples say no.
[Again, you're claiming that ZFC believes that two different operations, which happen to have "addition" in their title, are the same. There is no basis for this other than the shared name.]
They always work out the same when restricted to {0, 1}. The names +0 and +1 on {0, 1} become no more than mere names which operate the same. In other words, there exists no distinction in terms of results between them. Doesn't the identity of indiscernibles principle (that's a basis) imply they work out as identical on {0, 1}? In other words, in every particular case of addition on {0, 1} with x, y we have
x(+1)y=z and x(+2)y=z.
By the transitive property of equality, we have x(+1)y=x(+2)y. Well, x=x and y=y by identity, so only +1 and +2 look like they differ; but since x and y don't differ, and x(+1)y=z and x(+2)y=z, we get +1=+2 on {0, 1}. What basis don't I have here... the transitive property of equality? That x=x? That x(+1)y=z on {0, 1} and x(+2)y=z on {0, 1}? I simply don't see how I lack a basis. In fact, it seems rock-solid here at least.
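The exhaustive check behind that argument is tiny, since on {0, 1} min and product are literally the same function:

```python
from itertools import product

for x, y in product((0, 1), repeat=2):
    assert min(x, y) == x * y  # so max-min and max-product constructions agree on {0, 1}
print("min == product on {0, 1}")
```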
[Fuzzy complement is not the same as strict complement, the fact that they have a word in common notwithstanding.]
How do they differ when we restrict our function to {0, 1}? If we have a crisp set like {a, b} with reference set {a, b, d, f}, then the fuzzy complement says we have {d, f} as our complement, since we really have {(a, 1), (b, 1), (d, 0), (f, 0)}, which has complement {(a, 0), (b, 0), (d, 1), (f, 1)}, or simply {d, f}, using any sort of fuzzy complement whatsoever. Of course, if we have {(a, 1), (y, .6)}, the standard fuzzy complement c(a)=1-a says we have {(a, 0), (y, .4)}. How can the crisp complement compute this or allow for it?
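Both computations, sketched in Python (dicts again standing in for membership functions):

```python
c = lambda a: 1 - a  # the standard fuzzy complement

A = {'a': 1, 'b': 1, 'd': 0, 'f': 0}    # the crisp set {a, b} inside {a, b, d, f}
print({x: c(m) for x, m in A.items()})  # {'a': 0, 'b': 0, 'd': 1, 'f': 1}, i.e. {d, f}

B = {'a': 1.0, 'y': 0.6}                # a genuinely fuzzy set
print({x: round(c(m), 1) for x, m in B.items()})  # {'a': 0.0, 'y': 0.4}
```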
[There is no justification for claiming that statements about one should also hold true of the other.]
If one wants to make fuzzy set theory a generalization of crisp set theory, then there exists a reason to claim that fuzzy sets will behave like crisp sets when restricted to {0, 1}. Everything on fuzzy set theory I've ever read takes it for granted basically that fuzzy set theory does generalize or will generalize crisp set theory, or that we can view it that way. Know of any counter-examples?
[My same bait-and-switch objection.]
One starts in fuzzy set theory with [0, 1]. How do I switch things up by restricting things to {0, 1}? Isn't {0, 1} a subset of [0, 1] before I even start writing? Aren't the properties of crisp set theory already there, or derivable, on {0, 1} before I start writing also?
Again, there exists a basis for claiming fuzzy set theory will act like crisp set theory on a certain subset of itself. There exists a basis for claiming that fuzzy complement, intersection, and union, no matter what operators one uses for them, will behave just like crisp set operators on the subset {0, 1} of the set [0, 1]. There exists a basis for claiming crisp set theory as a special case of fuzzy set theory. There exists a basis for claiming that a union-intersection fuzzy operator behaves like regular addition on {0, 1}, while the normal addition operator +0 requires alpha-cuts and interval arithmetic for continuous fuzzy numbers instead of the extension principle. In solving fuzzy equations, Buckley and Eslami, in _An Introduction to Fuzzy Logic and Fuzzy Sets_, as well as in their papers and the book _Fuzzy Mathematics in Economics and Engineering_, show that the extension principle vs. alpha-cuts and interval arithmetic don't always yield the same solution with continuous fuzzy numbers. I do see how fuzzy set theory developed an extension principle and exploited it. Did crisp set theory do the same... or could it have done so?
I neither need to nor intend to; my claim is completely supported by the remarks that Stephan and Antendren have already repeatedly made, and which you blithely continue to ignore.
[I neither need to nor intend to; my claim is completely supported by the remarks that Stephan and Antendren have already repeatedly made, and which you blithely continue to ignore.]
Suit yourself by all means. The truism about opinions stands.
One thing remains radiantly clear throughout all of this. The idea of indicator functions for members in crisp set theory which map to {0, 1} comes as a special case of the idea of membership functions of fuzzy set theory which map to [0, 1]. Now, if crisp set theory really grounds fuzzy set theory, how does a specialized theory form a basis for a more generalized theory? How does a theory with less information form a proper basis for a theory with more information?
No one here has answered that last question.
How does crisp set theory lead to a three-valued logic? In Zadeh's original paper, he showed how one could use fuzzy sets to lead to a three-valued logic. He also noted that the notion of "belonging" doesn't really play a significant role in fuzzy set theory as it does in crisp set theory. Does three-valued or multiple-valued logic not qualify as part of mathematics? Why or why not?
If nothing more, and if you haven't found it already, here's Zadeh's original paper (although perhaps Max Black's paper on "vague sets" was better).
http://www-bisc.cs.berkeley.edu/zadeh/papers/Fuzzy%20Sets-1965.pdf
Not if it is you who makes claims against common understanding. Which I believe Stephan et al have defined clearly. You have the burden of proof, as Thony describes.
[Note that I'm not going into the specific debate, as I haven't had the energy to follow it in detail, nor the expertise to discuss it. That isn't why I reacted to your assertion above.]
I had a discussion on the Stanford campus with L. Zadeh about multivalued logic, fuzzy logic, and Artificial Intelligence. I don't remember the details -- this was roughly 20 years ago -- but I don't recall any foundation difficulty along the "vague" lines that Doug mentions.
Category Theory, and the n-Category Cafe [blog] folks, point out that their approach to pure Math and to Physics does NOT depend on Set Theory. They sneer at set theory, but not for the reasons that Wildberger does.
My wife and I actually contemplated using Wildberger's text on Trigonometry for a class, partly on the basis that my wife's Ph.D. was earned at the same UNSW (University of New South Wales, Sydney, Australia).
[The truism about opinions stands.
Not if it is you who makes claims against common understanding.]
No. A truism remains a truism no matter who states it, as long as truth remains independent of personal influence. I don't think you mean that people influence truth, do you? As an example... which isn't quite a truism but still works well enough... let's say a geocentrist says "I have never seen the Earth moving with my own eyes". That person, FOR THAT STATEMENT ALONE, states a truth. This doesn't mean that any perceived or real implication of such comes as true. Here, I would say I have the same experience as the geocentrist, and even if he had said "No one has seen the Earth moving with their own eyes" in 1950, I would say his statement as it stood worked out as true, even though I would disagree with his implication "therefore the Earth doesn't move."
[You have the burden of proof, as Thony describes.]
Thinking about this a bit more and searching around, I suppose I make one assumption I haven't shown, which was _suggested_ to me by this search-result snippet: "Set theory is replaced by the theory of characteristic functions or 'indicators' in Chapter 4, while the definition of sets of measure zero and the ..." (blms.oxfordjournals.org/cgi/reprint/4/2/254.pdf, found via http://www.google.com/search?hl=en&q=characteristic+function+set+theory…)
I don't have access to the journal, nor have I had luck finding that information in more detail elsewhere. I basically take it for granted that one can replace ZFC (or NBG for that matter) by a theory of indicator functions on {0, 1}... after all, a common definition of crisp sets comes as "a collection of distinct, well-defined objects." Indicator functions make those objects well-defined and distinct... if we have the same indicator functions at work for the exact same elements, we have indiscernible sets. Even though that doesn't define ZFC sets, as I see it, ALL ZFC sets satisfy that definition. Even Antendren's example above of a function into [0, 1] still works out as a well-defined, distinct collection of objects (unlike how, if he had chosen to introduce a "degrees of membership" concept, it wouldn't have worked out that way, due to semantic meaning). Additionally, indicator functions tell us everything about belonging/membership in crisp set theory as I understand it, which comes as inestimably important in set theory. So, even if I don't have crisp set theory, I have something extremely similar (unless that membership idea isn't so important).
Taking that as the basis, indicator functions have basically the same (if not identical) properties as classical logic for their operators. Those properties don't just work as consequences, in my view. If one takes those properties as axiomatic and asks "what consists of the very simplest possibility that satisfies all of these properties?", we will get the indicator function or something structurally the same as the indicator function. If there does exist a simpler structure than the indicator function, then I've affirmed the consequent and my reasoning becomes abductive at best.
So, to show that crisp sets... or at least a theory of indicator functions... come, AT LEAST in terms of their operators, as a special case of fuzzy sets, I only need to refer to the axiom list I gave above. Since every fuzzy set satisfies those axioms when confined to {0, 1}, as I will show, and those axioms (if I have written them up correctly!) yield the same properties that crisp sets have, fuzzy set theory behaves just like crisp set theory... or at least a theory of indicator functions, if that REALLY does differ from ZFC or NBG... on {0, 1}. Again, this doesn't work as mere similarity, since the indicator function comes as the very simplest way to generate those properties.
I'll first restate my axioms fully and in words in case something gets swallowed:
1st axiom: 0 is less than or equal to a and b, which are less than or equal to 1, and the intersection and union of (a, b), and the complement of a, are in [0, 1]. Symbolically, i(a, b), u(a, b), c(a) are in [0, 1].
2nd axiom: i(1, 1)=1 or the intersection of a membership function which equals 1, and another membership function which has the same member and equals 1, equals 1.
3rd axiom: i(0, 1)=i(1, 0)=i(0, 0)=0
4th axiom: u(0, 0)=0
5th axiom: u(0, 1)=u(1, 0)=u(1, 1)=1
6th axiom: c(0)=1, c(1)=0.
We've really added a seventh assumption... but again, it's different, since it comes as a restriction for just the investigation of crisp sets. The other axioms don't work as a restriction that makes crisp sets, but rather hold for all fuzzy sets.
7th axiom: a and b are both in {0, 1}.
Proposition 1: c(c(a))=a
Proof: We only need to check two cases.
Let a=0. Then c(c(a))=c(c(0))=c(1)=0=a by definition and axiom 6.
Let a=1. Then c(c(a))=c(c(1))=c(0)=1=a again by definition and axiom 6.
Proposition 2: u(a, a)=a, i(a, a)=a.
Proof:
Let a=0. By axiom 4 we have u(0, 0)=0, by axiom 3 we have i(0, 0)=0.
Let a=1. By axiom 5 we have u(1, 1)=1. By axiom 2 we have i(1, 1)=1. So idempotency holds for all fuzzy sets on {0, 1}.
Proposition 3: u(a, b)=u(b, a) (equivalently, a u b = b u a... I'll maintain the first notation from now on.) i(a, b)=i(b, a)
Proof:
By proposition 2 we have the (0, 0) and (1, 1) cases as valid, so we need just check the (0, 1) and (1, 0) cases.
u(0, 1)=u(1, 0)=1 by axiom 5.
i(0, 1)=i(1, 0)=0 by axiom 3.
Proposition 4: u(a, u(b, c))=u(u(a, b), c),
i(a, i(b, c))=i(i(a, b), c)
Proof:
Case (0, 0, 0):
u(0, u(0, 0))=u(0, 0)=0 u(u(0, 0), 0)=u(0, 0)=0 ax. 4
i(0, i(0, 0))=i(0, 0)=0 i(i(0, 0), 0)=i(0, 0)=0 ax. 3.
Case (0, 0, 1):
u(0, u(0, 1))=u(0, 1)=1 u(u(0, 0), 1)=u(0, 1)=1 ax. 4, 5
i(0, i(0, 1))=i(0, 0)=0 i(i(0, 0), 1)=i(0, 1)=0 ax. 2, 3
Case (0, 1, 0):
u(0, u(1, 0))=u(0, 1)=1 u(u(0, 1), 0)=u(1, 0)=1 ax. 4, 5
i(0, i(1, 0))=i(0, 0)=0 i(i(0, 1), 0)=i(0, 0)=0 ax. 2, 3
Case (0, 1, 1):
u(0, u(1, 1))=u(0, 1)=1 u(u(0, 1), 1)=u(1, 1)=1 ax. 4, 5
i(0, i(1, 1))=i(0, 1)=0 i(i(0, 1), 1)=i(0, 1)=0 ax. 2, 3
Case (1, 0, 0):
u(1, u(0, 0))=u(1, 0)=1 u(u(1, 0), 0)=u(1, 0)=1 ax. 4, 5
i(1, i(0, 0))=i(1, 0)=0 i(i(1, 0), 0)=i(0, 0)=0 ax. 2, 3
Case (1, 0, 1):
u(1, u(0, 1))=u(1, 1)=1 u(u(1, 0), 1)=u(1, 1)=1 ax. 4, 5
i(1, i(0, 1))=i(1, 0)=0 i(i(1, 0), 1)=i(0, 1)=0 ax. 2, 3
Case (1, 1, 0):
u(1, u(1, 0))=u(1, 1)=1 u(u(1, 1), 0)=u(1, 0)=1 ax. 4, 5
i(1, i(1, 0))=i(1, 0)=0 i(i(1, 1), 0)=i(1, 0)=0 ax. 2, 3
Case (1, 1, 1):
u(1, u(1, 1))=u(1, 1)=1 u(u(1, 1), 1)=u(1, 1)=1 ax. 5
i(1, i(1, 1))=i(1, 1)=1 i(i(1, 1), 1)=i(1, 1)=1 ax. 2
Proposition 5: i(a, u(b, c))=u(i(a, b), i(a, c))
u(a, i(b, c))=i(u(a, b), u(a, c))
Proof:
Case (0, 0, 0):
i(0, u(0, 0))=i(0, 0)=0 u(i(0, 0), i(0, 0))=u(0, 0)=0
u(0, i(0, 0))=u(0, 0)=0 i(u(0, 0), u(0, 0))=i(0, 0)=0
Case (0, 0, 1):
i(0, u(0, 1))=i(0, 1)=0 u(i(0, 0), i(0, 1))=u(0, 0)=0
u(0, i(0, 1))=u(0, 0)=0 i(u(0, 0), u(0, 1))=i(0, 1)=0
Case (0, 1, 0):
i(0, u(1, 0))=i(0, 1)=0 u(i(0, 1), i(0, 0))=u(0, 0)=0
u(0, i(1, 0))=u(0, 0)=0 i(u(0, 1), u(0, 0))=i(1, 0)=0
Case (0, 1, 1):
i(0, u(1, 1))=i(0, 1)=0 u(i(0, 1), i(0, 1))=u(0, 0)=0
u(0, i(1, 1))=u(0, 1)=1 i(u(0, 1), u(0, 1))=i(1, 1)=1
Case (1, 0, 0):
i(1, u(0, 0))=i(1, 0)=0 u(i(1, 0), i(1, 0))=u(0, 0)=0
u(1, i(0, 0))=u(1, 0)=1 i(u(1, 0), u(1, 0))=i(1, 1)=1
Case (1, 0, 1):
i(1, u(0, 1))=i(1, 1)=1 u(i(1, 0), i(1, 1))=u(0, 1)=1
u(1, i(0, 1))=u(1, 0)=1 i(u(1, 0), u(1, 1))=i(1, 1)=1
Case (1, 1, 0):
i(1, u(1, 0))=i(1, 1)=1 u(i(1, 1), i(1, 0))=u(1, 0)=1
u(1, i(1, 0))=u(1, 1)=1 i(u(1, 1), u(1, 0))=i(1, 1)=1
Case (1, 1, 1):
i(1, u(1, 1))=i(1, 1)=1 u(i(1, 1), i(1, 1))=u(1, 1)=1
u(1, i(1, 1))=u(1, 1)=1 i(u(1, 1), u(1, 1))=i(1, 1)=1
Proposition 6: i(a, c(a))=0, u(a, c(a))=1.
Proof:
Let a=0: i(0, c(0))=i(0, 1)=0 u(0, c(0))=u(0, 1)=1
Let a=1: i(1, c(1))=i(1, 0)=0 u(1, c(1))=u(1, 0)=1
Proposition 7: c(u(a, b))=i(c(a), c(b))
c(i(a, b))=u(c(a), c(b))
Proof:
Case (0, 0):
c(u(0, 0))=c(0)=1 i(c(0), c(0))=i(1, 1)=1
c(i(0, 0))=c(0)=1 u(c(0), c(0))=u(1, 1)=1
Case (0, 1):
c(u(0, 1))=c(1)=0 i(c(0), c(1))=i(1, 0)=0
c(i(0, 1))=c(0)=1 u(c(0), c(1))=u(1, 0)=1
Case (1, 0):
c(u(1, 0))=c(1)=0 i(c(1), c(0))=i(0, 1)=0
c(i(1, 0))=c(0)=1 u(c(1), c(0))=u(0, 1)=1
Case (1, 1):
c(u(1, 1))=c(1)=0 i(c(1), c(1))=i(0, 0)=0
c(i(1, 1))=c(1)=0 u(c(1), c(1))=u(0, 0)=0
Proposition 8: u(a, 0)=a, i(a, 0)=0
u(a, 1)=1, i(a, 1)=a
Proof:
Let a=0, then u(0, 0)=0=a (ax. 4), i(0, 0)=0 (ax. 3), u(0, 1)=1 (ax. 5), i(0, 1)=0=a (ax. 3)
Let a=1, then u(1, 0)=1=a (ax. 5), i(1, 0)=0 (ax. 3), u(1, 1)=1 (ax. 5), i(1, 1)=1=a (ax. 2)
Proposition 9: u(a, i(a, b))=a
i(a, u(a, b))=a
Proof:
Case (0, 0): u(0, i(0, 0))=u(0, 0)=0
i(0, u(0, 0))=i(0, 0)=0
Case (0, 1): u(0, i(0, 1))=u(0, 0)=0
i(0, u(0, 1))=i(0, 1)=0
Case (1, 0): u(1, i(1, 0))=u(1, 0)=1
i(1, u(1, 0))=i(1, 1)=1
Case (1, 1): u(1, i(1, 1))=u(1, 1)=1
i(1, u(1, 1))=i(1, 1)=1
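All nine propositions can also be checked mechanically: on {0, 1} the axioms force u, i, and c to agree with max, min, and 1-a, so one brute-force pass over those tables covers every hand-checked case above. A sketch, not a substitute for the proofs:

```python
from itertools import product

u = max              # union on {0, 1}, as forced by axioms 4 and 5
i = min              # intersection on {0, 1}, as forced by axioms 2 and 3
c = lambda a: 1 - a  # complement, as forced by axiom 6

V = (0, 1)
for a in V:
    assert c(c(a)) == a                          # Prop 1: involution
    assert u(a, a) == a and i(a, a) == a         # Prop 2: idempotency
    assert i(a, c(a)) == 0 and u(a, c(a)) == 1   # Prop 6: contradiction / excluded middle
    assert u(a, 0) == a and i(a, 0) == 0         # Prop 8: identities
    assert u(a, 1) == 1 and i(a, 1) == a
for a, b in product(V, repeat=2):
    assert u(a, b) == u(b, a) and i(a, b) == i(b, a)  # Prop 3: commutativity
    assert c(u(a, b)) == i(c(a), c(b))                # Prop 7: De Morgan
    assert c(i(a, b)) == u(c(a), c(b))
    assert u(a, i(a, b)) == a and i(a, u(a, b)) == a  # Prop 9: absorption
for a, b, d in product(V, repeat=3):
    assert u(a, u(b, d)) == u(u(a, b), d)             # Prop 4: associativity
    assert i(a, i(b, d)) == i(i(a, b), d)
    assert i(a, u(b, d)) == u(i(a, b), i(a, d))       # Prop 5: distributivity
    assert u(a, i(b, d)) == i(u(a, b), u(a, d))
print("Propositions 1-9 all hold on {0, 1}")
```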
So, at bare minimum I can generate all the basic properties of the indicator function (and those of propositional logic) from the 6 assumptions always used for fuzzy sets, plus the 7th restrictive assumption. In other words, I have the behavior of the indicator function. Again, I claim and maintain that the indicator function comes as the very simplest way to generate these properties, so I think that if any normal enough human finds these axioms, they'll find the indicator function eventually. If they have the indicator function, I think they have basically a theory of indicator functions, which I consider sufficient, if not thoroughly coextensive with, whatever crisp set theory... although perhaps I need one more concept. A set whose members all map to 0 under the indicator function works as the empty set, and a set which maps every member of our reference set to 1 works as the reference or universal set.

But notice that even though I don't have the notion of class in my basis, the key operator here comes as the indicator function, which I can basically generate or, at least, abduce from the basic properties above by searching for the simplest structure which satisfies them. Maybe I've done that. Maybe I've reasoned here abductively instead of deductively. Maybe that doesn't constitute mathematical proof. But it doesn't mean that I've done bad math exactly... in other words, I haven't made a mathematical mistake... nor necessarily a logical error, if I admit that I reason abductively here. Sure, such works out as invalid in classical two-valued logic, but it doesn't work that way in abductive logic, nor necessarily in a logic based on three or more values... and getting mathematical information requires more than classical logic, at least in its initial stages.
[I had a discussion on the Stanford campus with L. Zadeh about multivalue logic, fuzzy logic, and Artificial Intelligence. I don't remember the details -- this was roughly 20 years ago -- but I don't recall any foundation difficulty along the "vauge" [sic] lines that Doug mentions.]
Zadeh has warned against too much precision in fields like Artificial Intelligence. A quote I especially like from Zadeh on this: "As complexity rises, precise statements lose meaning, and meaningful statements lose precision."
I certainly don't think the foundations of mathematics come as simple to understand and think up. After all, only a few highly educated people understand them, and mainly in their heads, which certainly doesn't work as simple.
[Category Theory, and the n-Category Cafe [blog] folks, point out that their approach to pure Math and to Physics does NOT depend on Set Theory.]
I've read this briefly too. On this... can one derive category theory from ZFC/NBG set theory? Category theory qualifies as mathematics, so if not, there exists math outside of axiomatic set theory.
As a final note I don't "sneer" at set theory for the same reason Wildberger does either. I "sneer" at it, because it at least behaves too much like two-valued logic, and its concomitants. Two-valued logic would be fine if the people who used it realized how narrow and small of a system it works out as. Good luck convincing people of that though... especially some mathematicians!
I just want to repeat my major objection that I presented last time this topic came up.
Doug, can you implement a program that models fuzzy logic, allowing you to work with theorems and such? If you can't, personally, do you see that it is possible? (It is, of course, possible, because I've done it for a class assignment before.)
Computers are built on binary logic, which is two-valued. Everything that goes on in the computer is shoved through layers of interpretation until it is nothing more than a vast number of binary logic operations.
Thus, fuzzy logic can be implemented within two-valued logic.
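As a toy illustration of that point, here's a little fuzzy rule evaluated on perfectly ordinary binary hardware; the membership function, the rule, and the defuzzification step are all hypothetical, chosen only to show the shape of the thing:

```python
# A toy fuzzy controller running entirely on ordinary (binary) hardware.

def cold(temp_c):  # degree to which a temperature counts as "cold" (made-up ramp)
    return max(0.0, min(1.0, (15 - temp_c) / 15))

def heavy_clothing(degree):  # crude defuzzification: degree -> number of layers
    return round(1 + 3 * degree)

for t in (-5, 5, 20):
    d = cold(t)  # rule: IF temperature is cold THEN clothing is heavy (to degree d)
    print(f"{t:>3}°C: cold={d:.2f}, layers={heavy_clothing(d)}")
```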
In a similar way (the analogy isn't quite so clear, but Ant has made a good effort at defining it already), fuzzy set theory can be built within crisp set theory.
You keep, over and over and over again, assuming that for a system to be possible, it must be handled at the most basic level. This ignores the entire virtualization argument, where you can build a language on top of another one that has different capabilities. As long as you aren't trying to achieve something that the lower language can't theoretically handle (i.e. building a Turing machine on top of a finite state machine), you're fine. Since the vast majority of possible computational models are Turing-equivalent (including ZFC, when interpreted properly as a computational model), you're just fine.
Unless you're willing to assert that fuzzy set theory requires a super-Turing machine to properly implement?
[Doug, can you implement a program that models fuzzy logic, allowing you to work with theorems and such?]
Personally, no. I don't know computer programming.
[If you can't, personally, do you see that it is possible?]
Sure. But this doesn't mean I've modelled all of the theory in its generality. For instance, I might model the Zadeh or (max, min, 1-a) fuzzy logic. I've got something significant going there, but I haven't modelled the whole theory which satisfies the axioms above. I've particularized the operators for the program. So, a computer programmer can model a fuzzy logic, but well... what did your computer program involve? Did you particularize operators? If not, how did you model the whole theory... or, stated differently, a unifying theory of all particular fuzzy logics?
[Computers are built on binary logic, which is two-valued.]
Most are. But there do exist multi-valued computers and fuzzy computers. Here's one example I found looking around: http://www.patentstorm.us/patents/5295226.html
A Japanese researcher wrote a book that translates as _The Concept of a Fuzzy Computer_ a few years ago. There does exist such a thing as fuzzy hardware, as you can see here:
http://www.aptronix.com/company.htm. Seriously, almost everyone knows that analog computers exist... so I don't see what you've tried to pass off here. Lastly, some would argue that the very point of fuzzy logic/set theory comes as forming a proper linguistic calculus. As Zadeh has said, "You can compute with words instead of computing with numbers." http://query.nytimes.com/gst/fullpage.html?res=950DE5DF133EF931A35757C0…
Specifically, with words like "If the temperature is very cold, then put rather heavy clothing on." The terms "very cold" and "rather heavy" don't qualify as precise, nor really two-valued, since there exist no thresholds for terms like "very cold" or "rather heavy", but rather a gradual transition between states. This becomes especially noticeable when we talk about such terms numerically.
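Zadeh's classic convention even makes hedges like "very" computable: "very" squares the membership degree, which keeps the transition gradual rather than imposing a threshold. A sketch with made-up "cold" degrees:

```python
# Hypothetical "cold" fuzzy set over temperatures in degrees C.
cold = {0: 1.0, 5: 0.8, 10: 0.5, 15: 0.2, 20: 0.0}

# Zadeh's concentration hedge: "very cold" squares each membership degree.
very_cold = {t: round(m ** 2, 2) for t, m in cold.items()}
print(very_cold)  # {0: 1.0, 5: 0.64, 10: 0.25, 15: 0.04, 20: 0.0}
```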
[Thus, fuzzy logic can be implemented within two-valued logic.]
A particularization of the theory, but not something like a system of theories of fuzzy logic, as I understand it. Second, even if so... can a two-valued computer do the computations needed for fuzzy logic FAST enough? In a lecture by Zadeh on his site you can hear that he talks about how DNA computing or quantum computing may work as needed for fuzzy logic to really "take off". As I understand those possible fields of computing, they don't depend on two-valued logic completely.
[In a similar way (the analogy isn't quite so clear, but Ant has made a good effort at defining it already), fuzzy set theory can be built within crisp set theory.]
Looking back at what Antendren wrote he says "I now define a fuzzy set to be a function f with range contained in [0,1]." Sure, he can do that. I don't think he has fuzzy set theory at its basis. In other words, the analogy isn't so clear here. That function may assign a number in [0, 1], but does it assign a degree of membership for the element involved? Does it assign a grade of membership for the element involved? Sure, numerically it looks fine, but the concept of graded membership comes as the key idea. I still don't think Antendren's idea has this conceptually nor really approaches it, although he's done more than I expected he would. Also, the idea of graded membership itself almost implicitly leads to a multi-valued logic of some sort. I don't see a function in [0, 1] doing that UNTIL we assign the meaning of "somewhat, but not entirely belonging" or "somewhat, but not entirely true" or something similar for that function.
More significantly perhaps I don't see how Antendren's function gives imprecise boundaries to sets. It gives them a number, but they all still have characteristic functions as their MAIN characteristic. Characteristic functions endow all sets with precise boundaries. Maybe Antendren's function does something at some purely numerical or otherwise technical level, but I don't see how it works out semantically whatsoever.
[This ignores the entire virtualization argument, where you can build a language on top of another one that has different capabilities.]
I'm not familiar with this argument. How does it work?
[Unless you're willing to assert that fuzzy set theory requires a super-Turing machine to properly implement?]
Honestly, maybe I shouldn't bother, as this almost looks like a deliberate attempt at mischaracterization... or a straw-man argument. A Turing machine's tape divides into cells, each containing a symbol from some finite alphabet. Right there a disconnect arises between my argument and Turing machines. The "alphabet" [0, 1] doesn't work as finite, since irrational numbers exist in that set. So, I couldn't set up a Turing machine to numerically simulate all of fuzzy logic. The number of "states" drawn from [0, 1] isn't finite. As Wikipedia says, "Note that every part of the machine--its state- and symbol-collections--and its actions--printing, erasing and tape motion--is finite, discrete and distinguishable; it is the potentially unlimited amount of tape that gives it an unbounded amount of storage space." http://en.wikipedia.org/wiki/Turing_machine
Yea... I don't know anyone who would claim that a "super-Turing machine" would or even could simulate a theory about sets with imprecise boundaries at its core. Again, I feel amiss about what you've said here, just as I feel amiss about you talking about all computers (my reading of your argument's meaning) working from binary logic, when analog, fuzzy, and quantum computers exist.
[Fuzzy complement is not the same as strict complement, the fact that they have a word in common notwithstanding.]
Again, you can call this mere "word play", however some things remain certain enough. Complements on [0, 1] behave exactly the same as crisp complements when restricted to {0, 1}. Second, negations in classical propositional logic can get built entirely out of the ~T=F and ~F=T axioms, or the ~1=0, ~0=1 axioms if you prefer. From these axioms and the axioms for intersections and unions, properties like the principle of contradiction and the excluded middle follow directly as theorems. Crisp set theory has first-order logic as its background logic, so it assumes principles like contradiction and excluded middle as logical theorems for ALL truth values. So, crisp set theory doesn't have any ability to study combinations of truth values where these don't hold (the function above only worked as a number without a conceptual structure such as "degree of membership"). In multi-valued and fuzzy logics these principles don't work out as theorems for all truth values. If one assumes them as logical theorems, then one can come up with contradictions, as I did above for logical idempotency in fuzzy logic in the narrow sense (not fuzzy set theory, just fuzzy logic). So, if crisp set theory does somehow exist as a basis for all of mathematics, then there exist parts of logic that mathematics simply can't study. If logic works as a basis for mathematics, then one can assume a different structure for mathematics and get different mathematics.
Supposing there to exist parts of logic that mathematics simply can't study would come out as significant, since rules of approximate reasoning like
"If the temperature is high,
and the volume is large,
then increase pressure to a very high amount," rely on a fuzzy logical structure, which, by the above, mathematics can't really study without violating its own logical assumptions. Without these rules fuzzy control would never have happened, and a large amount, if not all, of fuzzy applications would go out.
I hope my reply to Xanthir posts properly, since I simply don't get why he wants to pass off ALL computers as relying on binary logic, when fuzzy hardware, fuzzy computers, multivalued computers, quantum computers, and analog computers do exist and come as useful in certain contexts. I also don't see how modelling a part of fuzzy theory, like the (min, max, 1-a) or Zadeh logic, on a computer (as I suspect he did) really works out like modelling the entire structure of fuzzy theory... let alone the "super-Turing machine".
Xanthir's question is good. Are you, Doug, saying that Fuzzy Sets demolish the Church-Turing thesis? That NOT every effective computation can be carried out by a Turing machine?
I do agree that, to some extent, "one can assume a different structure for mathematics and get different mathematics." But how does that imply that there "exist parts of logic that mathematics simply can't study"? Can't? Not any future humans, any aliens, or any quantum computers? If not, why not? If so, please explain your radical vision.
[you can call this mere "word play"]
It is mere word play. They are different things. There's no reason to think that things which hold about one hold about the other.
By way of analogy, let me construct the integers in the natural numbers. + will be the standard addition in the naturals, while +Z will be addition in the integers.
The natural 0 will represent the integer 0.
Natural numbers of the form 2n+1 will represent the integer n+1.
Natural numbers of the form 2n will represent the integer -n.
Then I define +Z as follows:
(2n+1) +Z (2m+1) = 2(n+m+1) + 1
(2n) +Z (2m) = 2(m+n)
(2n + 1) +Z (2m) = 2(m - n - 1) if m > n
(2n + 1) +Z (2m) = 2(n - m) + 1 otherwise.
Now, the following is a theorem of the natural numbers:
∀x ∀y x + y = 0 ⇒ x = y = 0.
On the other hand, clearly this is not a theorem of the integers. For example, 1 +Z 2 = 0. By your previous reasoning, this means that I can't build the integers in the naturals. But clearly I just did.
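For what it's worth, the encoding runs; here's a quick Python rendering (I compute +Z by decoding, adding, and re-encoding, which agrees with the case-by-case definition above):

```python
# Antendren's encoding: natural 0 is integer 0, 2n+1 is integer n+1, 2n is integer -n.
def encode(z):    # integer -> the natural number representing it
    return 2 * z - 1 if z > 0 else -2 * z

def decode(nat):  # natural number -> the integer it represents
    return nat // 2 + 1 if nat % 2 else -(nat // 2)

def add_Z(x, y):  # +Z, computed by decode / add / re-encode
    return encode(decode(x) + decode(y))

print(decode(1), decode(2))  # 1 -1
print(add_Z(1, 2))           # 0 -- so "x + y = 0 implies x = y = 0" fails for +Z
```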
Doug:
I fail to see that you addressed the substance of my comment.
Jonathan,
[Are you, Doug, saying that Fuzzy Sets demlish the Church-Turing thesis? That NOT every effective computation can be carried out by a Turing machine?]
I don't know much about this, but the notion of an effective computation doesn't work as sufficient for predicate calculus, but does work as sufficient for two-valued propositional calculus as I understand it. "The truth table test is such a method for the propositional calculus. Turing showed that, given his thesis, there can be no such method for the predicate calculus. He proved formally that there is no Turing machine which can determine, in a finite number of steps, whether or not any given formula of the predicate calculus is a theorem of the calculus."
http://www.alanturing.net/turing_archive/pages/Reference%20Articles/The%20Turing-Church%20Thesis.html
But, if one talks about a three-valued propositional calculus or an infinite-valued propositional calculus, things may work differently. I can't say I refute such a thesis, as I don't feel comfortable in saying I understand it, but let's say we have statement A, which has a truth value of .5, and statement B, with a truth value of .4999... (with 9s on forever). In a real-number-valued propositional calculus, we'd call these statements logically equivalent. But if we used a propositional calculus like a neutrosophic propositional calculus, then we wouldn't, as there exists an infinitesimal difference between the two. But in such a calculus one could take the shadow of their truth values and still claim them as equivalent in some sense. So, things might work differently here.
[I do agree that, to some extent, "one can assume a different structure for mathematics and get different mathematics." But how does that imply that there "exist parts of logic that mathematics simply can't study"?]
I didn't say that. I said, "So, if crisp set theory does somehow exist as a basis for all of mathematics, then there exist parts of logic that mathematics simply can't study." If one assumes a suitably different structure than crisp set theory, then mathematics can study basically all of logic, so far as I know. That's my point... one has to entertain the idea of logics other than two-valued logic underlying mathematics if one wants to use mathematics to study something like multi-valued logic; otherwise contradictions arise, which don't and can't exist according to the assumptions of a two-valued logic.
Antendren,
[[you can call this mere "word play"]
It is mere word play. They are different things. There's no reason to think that things which hold about one hold about the other. By way of analogy, let me construct the integers in the natural numbers. + will be the standard addition in the naturals, while +Z will be addition in the integers. On the other hand, clearly this is not a theorem of the integers.]
As I see it, then, your "building" of crisp set theory will have different theorems than that of fuzzy set theory. It will have theorems which don't hold in fuzzy set theory, but which will hold in the space where it claims to emulate fuzzy set theory, just as your natural numbers have a theorem which fails for the integers they encode.
So, then, something like the principle of contradiction, or idempotency, ONLY holds for two-valued logic. In such a case, there exist logics without these principles holding. Consequently, assuming a logic such as two-valued logic doesn't work out as consistent with everything it supposedly derives. The two-valued logic at base has different rules than a three-valued logic which supposedly comes from it. In other words, the base logic doesn't work out as universally applicable; it has a limited domain, just as the naturals come out as having a limited domain. One can't properly apply a two-valued theory to a multi-valued theory, since they have different logics at work. Case in point: one can't apply the logical structure of crisp set theory to that of fuzzy set theory, since they have different logics at work... at least without inconsistencies, just like how one can't apply the structure of the natural numbers to that of the integers without problems.
[Natural numbers of the form 2n+1 will represent the integer n+1.
Natural numbers of the form 2n will represent the integer -n.]
I think you've equivocated on your symbols here actually. (2n+1)=(n+1) and (2n)=(-n). Consequently, we have (2n+1)=(-n+1) by simple substitution. Then we have (2n+1)=(-n+1)=(n+1), by which we have -n=n, which doesn't hold for the integers nor the naturals. Maybe that doesn't work as a problem for what you've written though.
I don't see how you've defined '-' in the set of natural numbers alone, and I don't see how anyone can do so, taking '-' as a closed operation, as we usually do. After all, 3-5 doesn't happen in the natural numbers, and consequently the '-' symbol becomes nonsensical for assumptions about all natural numbers considered as a closed set. So, really, I don't see your '-' as a valid closed operation for JUST the set of natural numbers, unless you want to specify that a>b for operating with '-'. Of course, this adds an axiom to the '-' operation, so you've worked with something richer in structure (or more complicated) than the usual '-'.
Wait... it's worse. You defined 2n as the integer '-n'. You don't have the integers to start with. You ONLY have natural numbers. You already have the concept of '-' at work as a unary operator on a natural number. Of course, from a natural number, '-' as a unary operator yields something other than a natural number, so you don't have a '-n' in the natural numbers... no closure. Defining '-n' already works outside your base set. In other words, you haven't gotten the unary operator '-' from the base set of the naturals, nor can you define it purely in your base set of the naturals in a way that has closure.
[Natural numbers of the form 2n+1 will represent the integer n+1.
Natural numbers of the form 2n will represent the integer -n.]
So, let n=m=1. Then,
2(1+1+1)+1 = 7 in the natural numbers. 3 +Z 3 = 7 in the integers also. Of course, 7 doesn't equal 6 in the integers. So, I don't see how you've constructed the integers here.
I don't think you've fully constructed the concept of integers from the natural numbers, actually, in a more serious way. An integer, when graphed on a number line, has an inverse which lies equidistant when REFLECTED about the origin. There exists nothing in the algebraic axioms for integers usually, nor these ones, which says that something like that geometrical property of reflection will hold, as I understand it. You may have, of course, constructed a structure consistent with algebraic axioms used to "look" at integers algebraically, but I don't see how your structure will necessarily imply something about a reflection... a movement... around an origin in geometry. So, I see a massive conceptual mismatch between your algebra here and commonly understood ideas about integers. This doesn't mean you don't have any real information about the integers. It does, however, imply that you have only some of the basic conceptual information about the integers in your modelling of them.
Torbjorn Larsson,
[I fail to see that you addressed the substance of my comment.]
Thinking more on your comment:
[Not if it is you who makes claims against common understanding. Which I believe Stephan et al have defined clearly. You have the burden of proof, as Thony describes.]
I only really have a burden of proof if I make a positive claim. Even if my statements go against "common understanding", if "common understanding" makes the claim, then the "commoners" have the burden of proof. My original statement, with reference to fuzzy set theory coming as an example of how crisp set theory doesn't cover everything, comes as a negative claim: crisp set theory does NOT cover all of mathematics. The burden of proof, here, lies on the crisp set theorists to show that they can literally derive all of mathematics from crisp set theory. My examples just serve to point out the futility of this... or if they can do so, the insufficiency of such a mathematics, since crisp set theory can't analyze multi-valued or fuzzy logic without violating its own hypotheses. Again, my claim comes as negative... so I don't see how the burden of proof really falls on my shoulders, although as illustrated above I repeatedly try to put forth a lot of information here.
Okay, looked up neutrosophic logic. It appears that it utilizes the hyperreals to capture the notion of infinitesimals.
You are correct. Propositional logic coupled with the reals is less powerful than that. Change it so that it is coupled with the hyperreals and alter the axioms appropriately and you would have no problems.
It really seems that most/all of your issues with this, Doug, come from you assuming that there is one particular structure that *must* be followed within the other logics, and then trying to apply assumptions from fuzzy logic to it. This has been said before, but that doesn't work.
All of which are equivalent in computational power (excepting possibly quantum computers, but they're probably equivalent as well). That is, every single one of the computers can simulate the others exactly (though they may be slow).
That's exactly what we're trying to say with regards to crisp/fuzzy logic. Both logics operate on different rules, but you can simulate each from within the other to a perfect degree - both of them have sufficient power to fall under the Church-Turing umbrella and be capable of universal computation.
A mechanical computer doesn't work by passing around high/low charges that represent zeros and ones. Regardless, it can simulate this operation, and thus (slowly) run a digital computer exactly as if it was, itself, digital. If you want to treat it like a digital computer, though, you *must* interact with it on the simulation level - you obviously can't just plug an ethernet cable in somewhere unless you want it to be ground between some gears. ^_^
Same thing applies with crisp/fuzzy logics. You can implement fuzzy logic in crisp logic, but not directly - it's done on a simulation layer. You keep trying to apply the base rules of crisp logic to the simulated fuzzy logic, though - in other words, you're jamming the ethernet cable between the gears and wondering why you don't have an internet connection. You must interact with the simulated fuzzy logic using the simulated fuzzy operators, not the basic crisp ZFC operators.
I've tried to stay out of this, because it seems like a dead-end argument that's really not going to go anywhere. But I wanted to jump in on something Xanthir said.
Quantum computers are not any more powerful, in the theory of computation sense, than conventional computers. There is some slim possibility that they might, in certain cases, be able to do some NP stuff, because quantum computing enables a certain kind of limited use of non-determinism. But that's only an issue of relative computational complexity: anything that can be done on a quantum computer can also be done on a simple, standard, digital computer.
WRT the other stuff...
It really is just word games. Doug, you insist that since the term "member of" is defined in crisp set theory, anything you build with set theory that uses the term "member of" must therefore be using the same "member of" operator as the basic, primitive one.
By that same kind of argument, number theory can't be defined in crisp set theory. There is no "addition" or "subtraction" operator in crisp set theory. There's no such thing as an object "2". I can represent a 2 as a set,
{empty,{empty}}, but that object has properties in crisp set theory that the number 2 does not have. The number 1 is not an element of the number two in number theory. But in set theory, if I use the sets {empty}, and {empty,{empty}} to represent 1 and 2, then 1 is an element of two.
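To make that concrete, here's a tiny Haskell sketch of the idea (purely illustrative, and written in Haskell, about which more below; the type and function names are my own inventions):

data VSet = VSet [VSet]              -- a set, given by a list of its elements

zero :: VSet
zero = VSet []                       -- 0 = {}

suc :: VSet -> VSet
suc n@(VSet xs) = VSet (n : xs)      -- n+1 = n U {n}

eqV :: VSet -> VSet -> Bool          -- extensional equality of sets
eqV (VSet xs) (VSet ys) =
  all (\x -> any (eqV x) ys) xs && all (\y -> any (eqV y) xs) ys

member :: VSet -> VSet -> Bool       -- the primitive "element of"
member x (VSet ys) = any (eqV x) ys

With this, member (suc zero) (suc (suc zero)) evaluates to True: 1 is an element of 2 when both are built as sets, a property the number-theoretic 2 simply doesn't have.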
The point is, what we do in set theory is model other things. We define what other things mean by constructing (crisp) sets to represent those other things, and then defining the operations on those constructed things in terms of functions over sets.
So if we wanted to model fuzzy set theory in crisp set theory, we would need to first have a construction of real numbers, and then using those real numbers, we would construct a representation of the fuzzy set theory objects using crisp sets and real numbers, along with a set of functions which define the basic operators of fuzzy set theory in terms of how they operate on our representation of fuzzy sets.
Then, when we use fuzzy sets, we would not use the crisp set theory "union", "subset", "member-of", etc., operations - those are the operations of the primitive substrate that we used to build things. We'd use our constructed operations, which are defined in terms of those primitive operations.
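Here's a minimal sketch of what those constructed operations might look like, under the assumption that a fuzzy set gets represented by its membership function (the names are invented for illustration):

type FuzzySet a = a -> Double                  -- membership grade in [0, 1]

gradeOf :: FuzzySet a -> a -> Double           -- constructed "member-of": a grade, not a yes/no
gradeOf s x = s x

unionF, intersectF :: FuzzySet a -> FuzzySet a -> FuzzySet a
unionF s t x = max (s x) (t x)                 -- constructed fuzzy union
intersectF s t x = min (s x) (t x)             -- constructed fuzzy intersection

subsetF :: [a] -> FuzzySet a -> FuzzySet a -> Bool
subsetF universe s t = all (\x -> s x <= t x) universe   -- constructed subset, over a listed universe

Notice that none of these are the primitive crisp operations: they're functions built on top of the substrate, which is the whole point.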
For a similar, but more concrete metaphor, I enjoy programming in a language called Haskell. Haskell is a purely functional language - a language without any assignment operator. But my computer doesn't know what a function is. It can only do computations in terms of a very primitive binary language which is completely defined in terms of mutable state. I can compare two values in Haskell - but Haskell's idea of equality is quite different from the notion of equality built into the hardware of my computer.
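A toy example of that difference (my own illustration; Frac is an invented type, not anything from a real library):

data Frac = Frac Integer Integer          -- numerator, denominator

instance Eq Frac where
  Frac a b == Frac c d = a * d == b * c   -- equal as fractions, not as bit patterns

Here Frac 1 2 == Frac 2 4 comes out True in Haskell, even though the two values sit in memory as entirely different bit patterns; the hardware's notion of equality never enters into it.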
How can my computer run Haskell? The non-mutable structures that I work with in Haskell are translated into mutable things in my computer's memory. When I compare two values, it's not just using the hardware comparison operation; it's building a comparison operation from some collection of comparisons and other primitive computations. What runs on the hardware is often not important from my point of view: I'm working in terms of the abstractions of Haskell. So I see lambda expressions and lazy lists; my computer sees bits and bytes, instruction pointers and numbers.
I can get some pretty bizarre results by jumping into the primitive translation of my Haskell program into native binary. The primitive values in the binary don't look much like the abstract objects that my program works with, and if I do things like assignments, comparisons, jumps, etc., in that binary code, then the meaning of my program in Haskell will be compromised, because I'll have broken the abstraction.
Heh, sorry about that Mark. I keep drawing my line wrong with quantum computation. I know that QC can move some problems, like factoring, into quantum polynomial time (we have proven algorithms just waiting for a computer to run them on!), but I keep for some reason interpreting that in my head as being a statement about their membership in the Church-Turing club.
This, despite the fact that I *know* that all the quantum interactions can be modelled just fine in traditional math, and thus run on a normal digital computer. It just shunts the problem back up into a harder computational class, is all.
My brain keeps lying to me!
Xanthir,
[Change it so that it is coupled with the hyperreals and alter the axioms appropriately and you would have no problems.]
If we alter the axioms, we have a different theory.
[That is, every single one of the computers can simulate the others exactly (though they may be slow).]
As I understand it, we've started to talk about theoretical computation. With theoretical computation, we can have infinite loops, or simple programs which go on without stopping at any point. For instance, we can have simple (basic) programs like this:
10 Let n=3
20 Let x=n+2
30 Print x
40 If x mod 2 = 0 then stop
50 If x mod 2 = 1 then goto 20
Of course, such a program never stops. Now, if one of these infinite programs A works out faster than another infinite program B, then B will never produce all the same results that A has already produced, given that they start at the same point in time. So, I don't see how B can simulate A. Likewise, I think it possible that there exists an infinity of propositions from both classical logic and a system of non-classical logics (I don't accept that all mathematical statements have to get written in a finite alphabet, since we can do maths purely in our heads). If a system based on non-classical logics produces propositions faster than one based on classical logic, then the system based on non-classical logics will produce more information than the one based on classical logic has already produced.
[Both logics operate on different rules, but you can simulate each from within the other to a perfect degree - both of them have sufficient power to fall under the Church-Turing umbrella and be capable of universal computation.]
Actually, I consider your statement different from what Antendren proposed, since he only talked about crisp set theory founding fuzzy set theory. Your statement, as I understand it, says that crisp logic can found fuzzy logic and fuzzy logic can found crisp logic. If this holds AND fuzzy logic works out as faster, it seems that if one has to pick between the two, then one would pick fuzzy logic, for why would someone prefer a slower theory?
Mark,
[By that same kind of argument, number theory can't be defined in crisp set theory. There is no "addition" or "subtraction" operator in crisp set theory. There's no such thing as an object "2". I can represent a 2 as a set,
{empty,{empty}}, but that object has properties in crisp set theory that the number 2 does not have. The number 1 is not an element of the number two in number theory. But in set theory, if I use the sets {empty}, and {empty,{empty}} to represent 1 and 2, then 1 is an element of two.]
So set theory leads us astray with respect to the concept of number. Set theory implies that each natural number n works as a member of n+1. But our concept of natural numbers doesn't include such a thing. Consequently, set theory doesn't really capture our idea of a natural number and leads us to a different concept of number than that used in number theory. I don't see a problem here. It just means that set theory doesn't cover all mathematical concepts. Again, I've tried to provide examples of that from the beginning, and this does so. So what's the problem here? Have I elucidated how crisp set theory doesn't work out as a conceptual foundation for all of maths in terms of its concepts, and that's too much to swallow?
[The point is, what we do in set theory is model other things. We define what other things mean by constructing (crisp) sets to represent those other things, and then defining the operations on those constructed things in terms of functions over sets.]
Sure, you can define what other things mean. But your example shows that modelling with crisp set theory can imply other notions about objects, which the original concepts did NOT imply, could not imply, and which simply work as incompatible with the original concept. In other words, there doesn't exist a one-one correspondence between the crisp set theoretic concept and that of the theory it models. So what? I don't see a problem with crisp set theory NOT having the ability to cover all of number theory. In fact, as I remember it, Godel says or implies in his preface that a system like crisp set theory basically can't cover all of number theory. Maybe that comes as inaccurate, but remember that Godel's theorems, whatever they do state, apply just as much to crisp set theory as they do to Principia Mathematica.
[Then, when we use fuzzy sets, we would not use the crisp set theory "union", "subset", "member-of", etc., operations - those are the operations of the primitive substrate that we used to build things. We'd use our constructed operations, which are defined in terms of those primitive operations.]
Hmmm... this might work. But you basically end up having more operations at work. You have crisp union, crisp intersection, crisp complement, max union, min intersection, 1-a complement, a+b-ab union, etc. With fuzzy set theory you don't need a separate concept of crisp union, crisp intersection, and crisp complement. They work out as special cases of ANY fuzzy union, intersection, and complement respectively. So, a fuzzy set theoretic approach has three fewer operators and thus works out as more parsimonious on [0, 1].
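A quick Haskell sketch of that "special case" relationship, using the standard Zadeh operators (an illustration only; the names are made up):

zUnion, zIntersect :: Double -> Double -> Double
zUnion a b = max a b             -- Zadeh union
zIntersect a b = min a b         -- Zadeh intersection

zComplement :: Double -> Double
zComplement a = 1 - a            -- standard complement

Restricted to {0, 1}, these reproduce the crisp operators exactly: zUnion 0 1 = 1 (crisp "or"), zIntersect 0 1 = 0 (crisp "and"), zComplement 0 = 1 (crisp "not"). Meanwhile zUnion 0.3 0.7 = 0.7 has no crisp counterpart.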
General comment:
[This, despite the fact that I *know* that all the quantum interactions can be modelled just fine in traditional math, and thus run on a normal digital computer. It just shunts the problem back up into a harder computational class, is all.]
Why "shunt" such problems into a harder computational class, when you can just adopt a new viewpoint and have things work out in an (much) easier computational class?
[As I see it, then your "building" of crisp set theory will have different theorems than that of fuzzy set theory. It will have theorems which don't hold in fuzzy set theory,]
Of course.
[I think you've equivocated on your symbols here actually. (2n+1)=(n+1) and (2n)=(-n).]
I've done nothing of the sort. I didn't write "equals", I wrote "represents". The natural number 7 represents the integer 4. The natural number 12 represents the integer -6.
[I don't see how you've defined '-' in the set of natural numbers alone]
In my definitions, I was very careful to never subtract a larger number from a smaller number. If you mean how I would define -Z, I haven't yet, but I can:
2n -Z (2m+1) = 2(n+m+1)
(2n+1) -Z 2m = 2(n+m)+1
(2n+1) -Z (2m+1) = 2(m-n) if m ≥ n
(2n+1) -Z (2m+1) = 2(n-m-1) + 1 otherwise
2n -Z 2m = 2(n-m) if n ≥ m
2n -Z 2m = 2(m-n-1) + 1 otherwise.
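Those six cases translate directly into code. Here's a Haskell sketch (the names decode and subZ are mine, added just to check the definitions mechanically):

decode :: Integer -> Integer              -- which integer a natural represents
decode k
  | odd k     = (k - 1) `div` 2 + 1       -- 2n+1 represents n+1
  | otherwise = negate (k `div` 2)        -- 2n represents -n

subZ :: Integer -> Integer -> Integer     -- the six cases above, verbatim
subZ a b
  | even a && odd b = 2 * (a `div` 2 + (b - 1) `div` 2 + 1)
  | odd a && even b = 2 * ((a - 1) `div` 2 + b `div` 2) + 1
  | odd a && odd b  =
      let (n, m) = ((a - 1) `div` 2, (b - 1) `div` 2)
      in if m >= n then 2 * (m - n) else 2 * (n - m - 1) + 1
  | otherwise       =
      let (n, m) = (a `div` 2, b `div` 2)
      in if n >= m then 2 * (n - m) else 2 * (m - n - 1) + 1

As a check, subZ 3 5 gives 2, and decode 2 gives -1: exactly the 2 - 3 = -1 computation described just below.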
[After all 3-5 doesn't happen in the natural numbers, and consequently the '-' symbol becomes nonsensical for assumptions about all natural numbers considered as a closed set.]
True, which is why I don't use it. On the other hand, 3 -Z 5 = 2, because 3 represents 2, 5 represents 3, and 2 represents -1.
[You defined 2n as the integer '-n'.]
I've done nothing of the sort. I haven't defined 2n to be anything. The only things I've defined are +Z and -Z. The bit about certain numbers representing others is entirely informal. It's to aid your intuition, nothing more.
Here's a dirty secret about math - we don't actually care what anything is. All we care about is how things interact with each other. This is why, for example, everything in category theory is always done up to isomorphism.
[You may have, of course, constructed a structure consistent with algebraic axioms used to "look" at integers algebraically, but I don't see how your structure will necessarily imply something about a reflection... a movement... around an origin in geometry.]
Since the natural numbers imply nothing about geometry, I don't consider it a failing that my integers don't either. In order to start talking about their geometry, you'll need to define some additional structure on the naturals. Specifically, you need notions of distance and angle. Define those for my integers in an intelligent manner, and you'll get the results you desire. Anything that can be proven about the integers can be proven about my construction of them within the naturals. For all intents and purposes, my integers are The Integers.
Doug:
You've just redefined the problem. Your original argument wasn't "Is crisp set theory a better starting point than fuzzy set theory", it was "Can you build fuzzy set theory using crisp set theory". The fact is, you can.
Whether it's the best way to build fuzzy set theory is a totally different question, and one which is irrelevant to the discussion going on here. You claimed that fuzzy set theory can't be built using crisp set theory. Now you're pulling a tactic that we affectionately call the "Gish gallop" - that is, rather than admitting to an error, you change the problem, and claim that it's the problem you were talking about all along.
Doug:
Yes, that was what I was referring to. But it is my understanding that this is the standard claim: set theory can be used to derive a basis for most or all of mathematics.
In any case, the gentlemen here claim that set theory and fuzzy set theory are equivalent as whatever basis they provide. (As fuzzy sets can be defined from sets, and conversely sets from fuzzy sets.)
It is your different claim that you need to back up.
I agree - you're starting up a Gish gallop, Doug. That's not a very attractive action. I'm willing to forgive it, though, because you're inspiring a very interesting conversation!
Well, yeah. Because now I'm trying to model neutrosophic fuzzy logic, which is a different theory than classical fuzzy logic. Different theory meets different theory.
Note, though, when I said 'axioms', I was referring more to 'definitions', like what Antendren is doing in his redefining of + and - into +Z and -Z. Neutrosophic logic has different operators, so you need to define different operators when you're simulating it. The ZFC axioms still apply as normal.
Ah, you're a bit mistaken here. The relative speed of the devices is completely irrelevant when asking about computability (which is equivalent here to the talk of simulation). Any problem that one type of computer (or logic) can compute, the other can.
What you are talking about is tractability, or the ease with which it can be done. Different problems can have vastly different tractability on different machines, but that has nothing to do with the relative computational power. The simplest machine capable of universal computation now is the 2-3 Turing machine - it was just proven universal a little while ago by a guy in his 20s. The 2-3 TM is so horrifically inefficient at even the simplest of problems that watching it is (a sort of beautiful) torture. However, it is exactly as powerful as the computer on your desktop, in terms of computation. Now, doing the sort of computations your computer does would take the 2-3 Turing machine longer than the age of the universe, but the fact is that it *can* do them, and that's what matters. ((In fact, given the digital brain hypothesis, which I see no signs of being untrue, the 2-3 Turing machine can do anything *you* can do, as your brain is Turing-equivalent.))
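For what it's worth, the machinery of "one machine simulating another" really is this small. Here's a minimal one-tape Turing machine stepper in Haskell (a sketch of my own, purely for illustration):

import qualified Data.Map as M

type State = Int
type Sym = Int
data Dir = L | R

-- transition table: (state, symbol) -> (new state, symbol to write, move)
type Delta = M.Map (State, Sym) (State, Sym, Dir)

-- the tape: symbols left of the head (reversed), the head symbol, the rest
type Tape = ([Sym], Sym, [Sym])

step :: Delta -> (State, Tape) -> Maybe (State, Tape)
step delta (q, (ls, s, rs)) = do
  (q', s', d) <- M.lookup (q, s) delta
  pure $ case d of
    L -> case ls of
           (l : ls') -> (q', (ls', l, s' : rs))
           []        -> (q', ([], 0, s' : rs))    -- 0 plays the blank
    R -> case rs of
           (r : rs') -> (q', (s' : ls, r, rs'))
           []        -> (q', (s' : ls, 0, []))

run :: Delta -> (State, Tape) -> (State, Tape)    -- run until no rule applies
run delta c = maybe c (run delta) (step delta c)

Anything you can phrase as a transition table runs on this, however slowly - and the slowness never enters into whether it *can* run.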
The most important thing here is what Antendren said:
Here's a dirty secret about math - we don't actually care what anything is. All we care about is how things interact with each other. This is why, for example, everything in category theory is always done up to isomorphism.
This is what we keep trying to emphasize, to varying degrees of success. As long as the interactions are correct, the actual makeup of the objects we're using is completely irrelevant. The fact that von Neumann numerals (numbers in set theory, as Antendren showed them) have more structure than the actual natural numbers do doesn't matter one bit - what matters is that when we define + and - correctly, von Neumann numerals give us exactly the same answer that natural numbers would. The actual functions we use are different in von Neumann numerals and natural numbers, but they have the same effects on their respective domains, and so we give them the same name.
Antendren,
[[As I see it, then your "building" of crisp set theory will have different theorems than that of fuzzy set theory. It will have theorems which don't hold in fuzzy set theory,]
Of course.]
Then what you propose does NOT capture what fuzzy set theory does capture. In such a case, a proposal of crisp set theory serving as a basis for fuzzy set theory fails, since they have different theorems.
[Here's a dirty secret about math - we don't actually care what anything is. All we care about is how things interact with each other. This is why, for example, everything in category theory is always done up to isomorphism.]
Fuzzy set theory on {0, 1} interacts with the basic properties of crisp set theory in the same way, but this interaction doesn't work the other way around. The simple consequence comes as that fuzzy set theory qualifies as more general.
[For all intents and purposes, my integers are The Integers.]
Not for the purposes of geometry, as you've already basically admitted.
Mark,
[But it is my understanding that this is the standard claim: set theory can be used to derive a basis for most or all of mathematics.]
Well, I wouldn't deny most. But all, as in 100%, comes as different.
[You claimed that fuzzy set theory can't be built using crisp set theory.]
Well, if one can build fuzzy set theory from crisp set theory, then I'll take Antendren's statement about how different theorems come out. Consequently, if one builds fuzzy set theory from crisp set theory, such a structure necessarily has different results than if one just used fuzzy set theory, which makes it so that one can't derive what fuzzy set theory does from crisp set theory.
[Now you're pulling a tactic that we affectionately call the "Gish gallop" - that is, rather than admitting to an error, you change the problem, and claim that it's the problem you were talking about all along.]
Look, I didn't initially state what the problem was, did I, other than a response to the idea that all of math came from crisp set theory? Remember... I initially used fuzzy set theory just as an example of how we can't derive everything from crisp set theory. Second, if you really want to derive fuzzy set theory from crisp set theory and can do so successfully, then you'll radically change how fuzzy set theorists preferably will do and would have done things. I've already pointed out approximate reasoning, the basis of fuzzy control. Seriously, it would go out the window if the principle of the excluded middle held everywhere, or at least fuzzy logics which have rejected it would, such as the Zadeh logic. Newer ideas like mixed fuzzy logic, a technique where you switch logical operators to preserve the basic laws of crisp sets, also get thrown out. Buckley and Eslami describe this technique on p. 42 of their An Introduction to Fuzzy Logic and Fuzzy Sets. So, such a derivation of fuzzy set theory from crisp set theory simply won't build the same structures as fuzzy set theory will.
Xanthir,
[I agree - you're starting up a Gish gallop, Doug. That's not a very attractive action. I'm willing to forgive it, though, because you're inspiring a very interesting conversation!]
THANKS! I admit there have come points where I've tried to switch the emphasis here, and I certainly have strongly switched the emphasis in what I've written. But I don't find a clearly formulated original problem, other than that of "Does crisp set theory cover all of mathematics?", which I don't think ever really got stated, but comes as how I meant my original response.
[Because now I'm trying to model neutrosophic fuzzy logic, which is a different theory than classical fuzzy logic.]
Interesting terminology, but I think it redundant (this IS picky and doesn't affect the content of what you said here whatsoever). Neutrosophic logic happens on ]-0, 1+[, or the interval from numbers infinitesimally less than 0 to those infinitesimally greater than 1. Hmmm... if one used just crisp set theory in trying to develop this, one first constructs the reals, then one constructs a function into the real interval [0, 1], then one constructs infinitesimals, then one constructs a function into the union of the reals and the infinitesimals, or into the hyperreal interval ]-0, 1+[. But the hyperreals get defined in terms of sequences. So, now our function maps to a sequence instead of a number. Even if sufficiently mathematically similar to a mapping into {0, 1} or into [0, 1], this comes out conceptually much different.
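A toy sketch of that "maps to a sequence" point (illustrative only: the genuine ultrapower construction of the hyperreals also quotients by a non-principal ultrafilter, which this toy omits entirely):

type Hyper = [Double]         -- a hyperreal as an infinite sequence of reals

eps :: Hyper                  -- a positive infinitesimal: 1, 1/2, 1/3, ...
eps = [1 / n | n <- [1 ..]]

fromReal :: Double -> Hyper   -- a standard real, as a constant sequence
fromReal = repeat

addH, mulH :: Hyper -> Hyper -> Hyper
addH = zipWith (+)            -- arithmetic acts pointwise on the sequences
mulH = zipWith (*)

A membership function would then return one of these sequences rather than a single number, which makes the conceptual distance from [0, 1]-valued membership plain.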
[Any problem that one type of computer (or logic) can compute, the other can.]
Given sufficient time. But, if time comes as a condition of the problem, as in "integrate 2yzx^3 + 3x^7z^4 within 0.6 seconds", then I don't see it.
[Now, doing the sort of computations your computer does would take the 2-3 Turing machine longer than the age of the universe, but the fact is that it *can* do them, and that's what matters.]
I certainly don't get what you mean by 'can'. I interpret 'can' to mean that something comes out possible in reality. If a computation C requires longer than the age of the universe to complete, and the universe consists of all that exists (which I take as tautologically true), then C comes out as impossible. So, I don't see how it makes any sense to say that the 2-3 Turing machine 'can' do what my computer does.
Maybe I do mean 'tractability' "or the ease with which it can be done," as you put it. The thing comes as that I don't recognize all too great of a difference between "the ease with which something can be done" and 'possibility'. I can tell you that I didn't really consider such an equivalence until I read a little about possibility theory, especially in reprints of Zadeh's papers. This isn't such a strange viewpoint in some ways, as students who take difficult tests will sometimes say something like "that test was impossible." Well, they might not have literally meant not possible at all, but rather that the test had a very, very high degree of difficulty (or equivalently its degree of ease or tractability came as very, very low).
[The actual functions we use are different in von Neumann numerals and natural numbers, but they have the same effects on their respective domains, and so we give them the same name.]
But, what happens when a structure A has the same effect on domain X as does structure B, but they have a different effect on domain Y? What happens if you end up calling A and B the same, since you only know about domain X where they have the same effect, but you discover in the (distant) future that there exists a domain Y where they don't have the same effect? Surely, I agree you can call them the same to some extent. But, if they got formed in different ways conceptually, and people go on creating new mathematics (which they will, of course), then there exists the possibility that some domain, some day, will come to exist where they don't work out equivalently. Wouldn't we have discovered and recognized such a domain more quickly if we hadn't called them the same? Or shouldn't we at least only call them the same for certain structures and especially... ONLY call them the same if we specify the conditions of calling them the same?
Computability. Yes.
Gregory Chaitin gave one hell of a good plenary talk here in Boston at the 7th International Conference on Complex Systems. He has a new book out...
He quoted Leibniz, and made a good case that Leibniz, among all the other amazing things he did, wrote about Complex Systems. And that Leibniz was a few centuries early in the kind of arguments he made about whether or not we could tell if the universe runs with a small number or a large number of physical laws.
Also was in a nice conversation with Chaitin, Stephen Wolfram, and James Gleick -- quite a group. Gleick has a new book coming out too, and then says he'll wait 5 or 6 or 7 years until the next one.
Not entirely on topic here, except, as I say: Computability. Yes. And not about speed. We do care if a program halts or not, and can't tell in general. And almost all real numbers are inaccessible, not just uncomputable. We can't even say what they are, when we can't compute their values. So the rant by the gentleman from UNSW about constructability and being dubious about sets and reals is not entirely off the mark, in the area that Turing mostly wrote about in that 1936-1937 paper (70 years ago) from which everyone likes to talk about Turing Machines. It was mostly a demonstration of how to construct a specific uncomputable number. Chaitin jokes that Turing need not have worked so hard -- pick a real number at random. With probability 1, it is uncomputable.
Doug: If I have been counting correctly you have now invoked the ghost of Gödel for the third time and to put your mind at rest: Yes! You do remember correctly.
The most extensive formal systems constructed up to the present time are the systems of Principia Mathematica (PM), on the one hand, and, on the other hand, the Zermelo-Fraenkel axiom system for set theory... In what follows it will be shown that this is not the case, but rather that, in both of the cited systems, there exist relatively simple problems of the theory of ordinary whole numbers which cannot be decided on the basis of the axioms. On Formally Undecidable Propositions of Principia Mathematica and Related Systems I by Kurt Gödel. Taken from Martin Davis' The Undecidable
You then write:
The simple straightforward answer is no! Gödel's proof applies to any system that is "rich enough" (the original German phrase is "reichhaltig genug") to construct arithmetic. As JVP has already pointed out it does not apply to the integers with only one single operator (either addition or multiplication) but as soon as you use both operators then Gödel applies no matter which system you are using as your foundation.
A further point that many people seem to misunderstand is that Gödel only applies if you restrict your methods of proof to finite ones. As should be well known, but unfortunately appears not to be, the consistency of arithmetic was proven by Gerhard Gentzen using transfinite methods (mathematical induction to epsilon null) in 1936.
A final nit pick, as I have argued once already with Torbjörn here on MarkCC's blog the correct spelling of Gödel's name if you don't have access to the letter "ö" is Goedel and not Godel.
That should of course read "Epsilon-Zero" and not "Epsilon-Null" as I was writing English and not German! Sometimes living and thinking bilingually has its pitfalls.
Doug, your claim to have proved arithmetic based on ZFC to be inconsistent would of course contradict Gentzen's proof of the consistency of arithmetic!
[Then what you propose does NOT capture what fuzzy set theory does capture. In such a case, a proposal of crisp set theory serving as a basis for fuzzy set theory fails, since they have different theorems.]
However, every theorem of fuzzy set theory, under appropriate translation, will hold in crisp set theory. Conversely, every theorem of crisp set theory that has the appropriate form comes from a theorem of fuzzy set theory.
To go back to the analogy, the following is a theorem of the integers: "For every x there exists a y such that x + y = 0." This is not a theorem of the naturals. But its translated version is: "For every x there exists a y such that x +Z y = 0."
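To see the translated theorem at work, here's a Haskell sketch of +Z reconstructed from the stated encoding (Antendren's own +Z definitions appeared upthread and aren't repeated here, so treat this case analysis as my reconstruction, not his):

addZ :: Integer -> Integer -> Integer
addZ a b
  | odd a && odd b   = 2 * (h a + h b + 1) + 1       -- (n+1) + (m+1)
  | even a && even b = 2 * (a `div` 2 + b `div` 2)   -- (-n) + (-m)
  | odd a            = mixed (h a) (b `div` 2)       -- (n+1) + (-m)
  | otherwise        = mixed (h b) (a `div` 2)
  where
    h k = (k - 1) `div` 2
    mixed n m                         -- the integer n + 1 - m
      | n >= m    = 2 * (n - m) + 1   -- a positive result, n - m + 1
      | otherwise = 2 * (m - n - 1)   -- a result that's zero or negative

The natural 7 represents the integer 4, the natural 8 represents -4, and indeed addZ 7 8 gives 0, the natural representing the integer 0. Every natural has a +Z-inverse, just as the translated theorem demands.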
[Not for the purposes of geometry, as you've already basically admitted.]
I have not. I said that once you define an appropriate notion of distance and angle, they are the integers as far as geometry is concerned.
Thony,
[Doug: If I have been counting correctly you have now invoked the ghost of Gödel for the third time and to put your mind at rest: Yes! You do remember correctly.]
Well, then I'd hypothesize the problem works out even worse for fuzzy set theory in extenso. I mean fuzzy set theory once it includes fuzzy arithmetic, fuzzy calculus, etc., once those get rather well-developed, since those fields work out as more complex than number theory. Although, I think I've read some people actually claim much of fuzzy set theory as decidable, although I didn't read in depth... so I probably didn't understand the conditions at work.
[The simple straightforward answer is no! Gödel's proof applies to any system that is "rich enough" (the original German phrase is "reichhaltig genug") to construct arithmetic. As JVP has already pointed out it does not apply to the integers with only one single operator (either addition or multiplication) but as soon as you use both operators then Gödel applies no matter which system you are using as your foundation.]
What differences between '+' and '*' exist in a formal definition of them? Having not seen such, I'd think '*' would get defined in terms of repeated '+', as I remember getting taught about '*' when I originally learned arithmetic.
[Doug, your claim to have proved arithmetic based on ZFC to be inconsistent would of course contradict Gentzen's proof of the consistency of arithmetic!]
Huh? I didn't make a claim about the inconsistency of arithmetic. It concerned the logical operators involved. At least so far as I recall, that doesn't match exactly what I said. Second, even in my original statement, such an inconsistency only applies under the conditions that we regard statements such as a^c(a)=0, or a v a=a, as always true. I still don't think it mere wordplay to regard a union operator like min(1, a+b) as the classical union operator. After all, min(1, a+b) interacts with the logical symbols/numbers involved in the same way as the "either a or b" operator for the domain of the classical indicator function. In other words, on {0, 1} we could call min(1, a+b) and "either a or b" identical, or the same. Of course, this comes as misleading if we want to deal with [0, 1]. If nothing more, this preferably would indicate some problem with giving the same name to different operators which have the same effect on their domains. After all, a 1950 set theorist might have thought it prudent to call min(1, a+b) and "either a or b" the same for a function on {0, 1}. But, of course, we know of theories today where this doesn't hold.
Antendren,
[However, every theorem of fuzzy set theory, under appropriate translation, will hold in crisp set theory. ]
If you can translate it into crisp set theory. I seriously doubt this. How does one translate ideas like "degrees of truth" and the like into crisp set theory or crisp logic?
I admit you've already done more than I thought you could, but it can get complicated. How does one translate theorems of pairs of operators for union at use in a mixed fuzzy logic into crisp set theory? Maybe you can do it theoretically... but it would seem to require almost excessive precision and conditions, so the degree of difficulty comes as rather high. Well, if one interprets possibility in terms of degree of ease, then it becomes not very possible.
[Conversely, every theorem of crisp set theory that has the appropriate form comes from a theorem of fuzzy set theory.]
Well, that's easy. Restrict the fuzzy set theorem to {0, 1} and you've got it, at least so far as I can tell. Maybe exceptions to this exist... well, one exception might come as using alpha cuts on convex fuzzy sets, which don't so much restrict values to {0, 1}, but change values so that they come as 'greater' and 'lesser'.
I can't remember that discussion. But my opinion is that I prefer the simpler "Torbjorn", while "Torbjoern" is more phonetically correct. There is no ambiguity here, and I can't think of any other example either. And in the specific case of my name, the phonetic difference is mild.
But you can see attempts in Swedish (pre UTF-8) to spell our letters "å, ä, ö" as "ao, ae, oe". And of course another consideration is the more explicit Danish-Norwegian alphabet of "æ, ø, å".
According to the US Wikipedia, the German umlauts (not part of the alphabet) "ä, ö and ü" would be transcribed as "ae, oe and ue": "simply using the base vowel (e.g. u instead of ü) would be considered erroneous by German speakers and is prone to producing ambiguities".
I'm not sure how the Austrian Gödel would have preferred his spelling, but it can be assumed that unless otherwise indicated Goedel is the correct guess. I assume I have conflated my private preference with the Germans', but I hope I know better now thanks to Thony.
Speaking of phonetics and ambiguities, I should note that the Swedish letters stand for phonemes, not diphthongs. (I'm not so well versed in other languages to discern how it is there, and how it plays against possible ambiguities.)
But it so happens that the diphthongs come closer to the target monophthong than the "incorrect" phoneme.
Yes, we have been here before and you made the same comment about the spelling of your own name last time round.
Gödel was born a citizen of the Austro-Hungarian Empire, became, much to his disgust, a Czech citizen with its dissolution and became a naturalised American citizen when he fled to America. However, Austrian, German and Swiss spelling has been standardised through a series of trilateral agreements since 1900, six years before Gödel's birth, so even for an Austrian it's Goedel.
Thanks, now I remember. Well, it is an explanation of why I did something Germans consider erroneous, while speaking a language of Germanic descent.
Now I believe you repeat part of your earlier comment as well. :-P
But thanks, one less misunderstanding of English. A few more to go.
This is an old post, but I wonder if you could clarify what you meant by group theory requiring real numbers? There's a great amount of group theory that can be done without the reals (and the axiom of choice!). Finite group theory is a pretty big theory.