Multiplication by Zero

If you made it through that last post and thought about it for a while, you might think that I pulled a fast one.

At a few places I commented that if x is any element of an arbitrary ring, then we know that x0 = 0.

That is certainly a familiar fact about the integers. There we think of x times y as representing the number of objects in a rectangular array with x items along the length and y along the width. If either x or y is 0, then your array will contain nothing at all.

But things are less clear when you are working in an arbitrary ring. In this context “multiplication” is merely an operation that happens to satisfy certain rules. It doesn't have to look anything like the familiar multiplication of integers.

The element 0, on the other hand, is defined solely in terms of addition. It is the unique ring element y with the property that x+y = x for all ring elements x. No mention of multiplication at all. And addition is itself merely a binary operation that happens to possess certain useful properties.
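For a concrete sense of how unfamiliar ring multiplication can look, consider the ring of 2x2 integer matrices. Here is a small illustrative sketch in Python (using numpy; nothing more than a spot-check): the zero matrix is the additive identity, singled out entirely by the addition rule, and multiplying by it hands it right back.

# An illustrative ring whose multiplication looks nothing like integer
# multiplication: 2x2 integer matrices under matrix addition and multiplication.
import numpy as np

x = np.array([[2, -1],
              [7,  3]])               # an arbitrary ring element
zero = np.zeros((2, 2), dtype=int)    # the additive identity: x + 0 == x

print(np.array_equal(x + zero, x))    # True -- this is what defines 0
print(x @ zero)                       # the zero matrix again: x times 0 is 0

Of course, a spot-check in one particular ring doesn't explain anything.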

So why should it be true that an arbitrary ring element, multiplied by the additive identity element, always returns the additive identity element back again?

The answer lies in the distributive property. In any ring, addition and multiplication are required to interact. With that in mind, watch how the trick is done.

Let x be an arbitrary ring element, and let 0 denote the additive identity element. Then:


x0   =   x(0+0)   =   x0+x0.

But we know that whatever ring element x0 happens to be, it must have an additive inverse. And when we add that additive inverse to both sides of our equation, we get:


x0-x0   =   x0+x0-x0,

which implies that x0 = 0, as desired, since the left-hand side is just 0 while the right-hand side collapses to x0.
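For readers who like to see such things machine-checked, the same three lines go through in a proof assistant. Here is a sketch in Lean 4 against Mathlib, using what I believe are the standard lemma names mul_add, add_zero and add_left_cancel (Mathlib already records the final fact as mul_zero):

-- A sketch of the argument above; no property of the integers is used,
-- only the ring axioms.
import Mathlib.Algebra.Ring.Basic

example {R : Type} [Ring R] (x : R) : x * 0 = 0 := by
  -- distributivity: x*0 + x*0 = x*(0 + 0) = x*0 = x*0 + 0
  have h : x * 0 + x * 0 = x * 0 + 0 := by
    rw [← mul_add, add_zero, add_zero]
  -- cancel x*0 on the left, using the additive inverse of x*0
  exact add_left_cancel h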

So no matter how abstract your rings get, it will always be true that anything times zero is zero.


Jeez, I just got addition by zero down, and now I've got to figure THIS out???

This maybe belongs below, but it'll get read more here. Dr Anderson (does that remind anyone else of The Matrix?) will respond on the BBC tomorrow. Should be fun.

He has his own website (always a bad sign). There are two papers dealing with this nonsense. The first seems okay to me, if rather trivial. The bit where he tries to explain how this is better than "NaN" is laughable, however.

The second paper on Calculus shows some real problems though. On page 8 he claims, to much comedy, that

sin(x) = sqrt( 1^x - cos^2(x) )

for all "transreal" x. Really? Lets take x = 1.5*pi (pi=3.141592... as usual) a normal real, so

-1 = sin(1.5*pi) = sqrt( 1^x - 1 )

Now I guess he defines sqrt(y) as y^{1/2} which is defined as exp( 0.5*ln(y)) on page 5. Hence

-1 = exp( 0.5*ln(1^x-1) )

Oh dear! He defines exp(y) to be +ve or +infinity or Nullity. That is, never negative.

Sorry for being long-winded: obviously the fool has forgotten that one needs to take both the +ve and -ve square roots, but it's not a good sign, frankly.
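If you want to see it in numbers rather than trust my algebra, a quick Python check (with the 1^x term just equal to 1 for an ordinary real x) shows the two sides disagreeing in sign:

# Checking the claimed identity at x = 1.5*pi with the ordinary
# non-negative square root: the left side is -1, the right side is +1.
import math

x = 1.5 * math.pi
print(math.sin(x))                     # about -1.0
print(math.sqrt(1 - math.cos(x)**2))   # about +1.0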

Section 6.2 is also more comedy. Take the function sin(x)/x. As an undergrad mathematician will know, this has a removable singularity at x=0. That is, sin(x)/x tends to 1 as x tends to zero from any direction in the complex plane (or just from +ve and -ve values of x, if you are happier with real numbers). If you draw it, you get a lovely smooth curve which gets close to 1 at x=0. Although sin(0)/0 isn't technically defined, it's obvious that it should be defined to be 1 at x=0. It's called a "removable singularity" as, essentially, making this definition causes no problems.
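And if you don't want to draw the picture, a few lines of Python make the point numerically (just the naive ratio, nothing clever):

# sin(x)/x for x approaching 0 from both sides: the values squeeze in on 1,
# which is exactly what "removable singularity" means in practice.
import math

for x in (0.1, 0.01, 0.001, -0.001, -0.01, -0.1):
    print(x, math.sin(x) / x)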

Sadly Anderson wants to define sin(0)/0 to be Nullity, meaning that the function becomes discontinuous at x=0. But draw the picture! Furthermore, all of the lovely theorems from complex analysis (the Cauchy integral theorem etc.) are now going to fail, whereas with sin(x)/x = 1 at x=0 they work fine. I think most mathematicians would consider this to really be throwing the baby out with the bath-water. Dr Anderson is a joker; it's a shame Reading is closing its physics department, and not sacking Anderson for being a joker.