*(note: this post has been closed to comments; comments about it on other pages will be deleted!)*

**UPDATE!!:** The saga continues at this post.

**MORE UPDATES, WITH REFUTATIONS!**

Every year I get a few kids in my classes who argue with me on this. And there are arguers all over the web. And I just know I'm going to get contentious "but it just *can't* be true" whiners in my comments. But I feel obliged to step into this fray.

.9 repeating equals one. In other words, .9999999... is the same number as 1. They're 2 different ways of writing the same number. Kind of like 1.5, 1 1/2, 3/2, and 99/66. All the same. I know some of you still don't believe me, so let me say it loudly:

Do you believe it yet? Well, I do have a couple of arguments besides mere size. Let's look at some reasons why it's true. Then we'll look at some reasons why it's not false, which is something different entirely.

The standard algebra proof (which, if you modify it a little, works to convert any repeating decimal into a fraction) runs something like this. Let x = .9999999..., and then multiply both sides by 10, so you get 10x = 9.9999999..., because multiplying by 10 just moves the decimal point to the right. Then stack those two equations and subtract them (this is a legal move because you're subtracting the same quantity from the left side, where it's called x, as from the right, where it's called .9999999..., but they're the same because they're equal. We said so, remember?):

    10x = 9.9999999...
    - x =  .9999999...
    -----------------
     9x = 9

Surely if 9x = 9, then x = 1. But since x also equals .9999999..., we get that .9999999... = 1. The algebra is impeccable.
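The same shift-and-subtract trick can be mechanized: multiplying by 10^n (where n is the length of the repeating block) shifts the decimal one full period, so (10^n - 1)·x is just the integer spelled out by the block. Here's a small Python sketch of that idea (the helper name `repeating_to_fraction` is my own, not anything standard):

```python
from fractions import Fraction

def repeating_to_fraction(repetend: str) -> Fraction:
    """Convert 0.(repetend repeated forever) to an exact fraction.

    Multiplying x by 10**n shifts one full period to the left, so
    (10**n - 1) * x equals the integer spelled by the repetend.
    """
    n = len(repetend)
    return Fraction(int(repetend), 10**n - 1)

print(repeating_to_fraction("9"))       # → 1   (the case at hand)
print(repeating_to_fraction("3"))       # → 1/3
print(repeating_to_fraction("142857"))  # → 1/7
```

Note that `Fraction` reduces 9/9 to exactly 1 with no rounding involved, which is the whole point.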

But I know that this is unconvincing to many people. So here's another argument. Most people who have trouble with this fact oddly *don't* have trouble with the fact that 1/3 = .3333333... . Well, consider the following addition of equations then:

      1/3 = .3333333...
    + 2/3 = .6666666...
    -------------------
      3/3 = .9999999...

This seems simplistic, but it's very, very convincing, isn't it? Or try it with some other denominator:

Which works out very nicely. Or even:

It will work for any two fractions that have a repeating decimal representation and that add up to 1.
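That claim is easy to spot-check by machine: run the long division for both halves of a complementary pair and compare digit positions. A small Python sketch (the helper name `digits` is mine, chosen for illustration):

```python
def digits(num: int, den: int, k: int):
    """First k decimal digits of num/den, by ordinary long division."""
    out = []
    r = num % den
    for _ in range(k):
        r *= 10
        out.append(r // den)
        r %= den
    return out

# Pairs of fractions summing to 1 whose expansions never terminate:
# at every decimal position the two digits add up to 9, i.e. the
# digit-wise sum is .999999...
for a, q in [(1, 3), (1, 7), (5, 14)]:
    d1, d2 = digits(a, q, 12), digits(q - a, q, 12)
    print(a, q, [x + y for x, y in zip(d1, d2)])  # every entry is 9
```

Any pair works as long as neither expansion terminates, since the two expansions must add up to 1 with no carrying available.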

Those are my first two demonstrations that our fact is true (the last one is at the end). But then the whiners start in about all the reasons they think it's false. So here's why it's not false:

- ".9 repeating doesn't equal 1, it gets closer and closer to 1."

May I remind you that .9 repeating is a *number*. That means it has its place on the number line somewhere. Which means that it's not "getting" anywhere. It doesn't move. It either equals 1 or it doesn't (it does, of course), but it doesn't "get" closer to 1.

- ".9 repeating is obviously less than 1."

Hmmmm...it might be obvious to you, but it's not obvious to me. Is it really less than 1? How much less than 1? No, seriously...tell me how much. What is 1 minus .99999999...? I know what you want to say: .0000000...1.

Really???? *Infinitely many* zeros and then, after the *infinite* list that *never ends*, there's a 1???? Surely that's stranger than the possibility that .9 repeating simply does equal 1. Or for something even stranger, consider this: if .9 repeating is less than 1, then we ought to be able to do something very simple with those two numbers: find their average. What's the number directly between the two? Or for that matter, name *any* number between the two. Let me guess: the average is .99999...05? So after this *infinite* list of 9s, there's the possibility of starting up multiple-digit extensions? Doesn't that just raise the obvious question: What about .9999999...9999999...? Namely, infinitely many 9s, and then after that infinite list, there's *another* infinite list of 9s? How, exactly, is that different from the original infinite list of 9s? If you saw it written out, where would the break between the lists be?

I'm afraid that if you apply the "huh??" test of strangeness, you get a much higher strangeness factor if you say that .9999999... is *not* 1 than you do if you say it *is* 1.

- "Uhhhhh, I'm sorry, but I still don't believe you. .99999... just can't equal 1."

Well, let's look a little more carefully at what we really mean by .999999...:

    .999999... = 9/10 + 9/100 + 9/1000 + 9/10000 + ...

This equation isn't really up for debate, right? It's simply the meaning of our place value system made explicit. That thing on the right-hand side is called an infinite geometric series. Such series have been studied extensively in math. The word "geometric" means that each term of the series is the identical multiple (in this case 1/10) of the previous term. The **definition** of the sum of an infinite geometric series (and other series, too, but we won't get into those) goes something like this:

- Start making a list of partial sums: the first number alone, then the sum of the first two numbers, then the sum of the first three, etc.
- Examine your list closely. In this case the list is: .9, .99, .999, .9999, .... (Note that the actual number .99999.... is not on the list, since every number on the list has finitely many 9s.)
- Find some numbers that are bigger than every single number on your list. Like 53, 3.14, and a million.
- Of all the numbers that are bigger than every number on your list, find the smallest possible such number. I think we can all agree that this smallest number is 1.
- That smallest number that can't be exceeded by anything on the list is the **definition** of the sum of the geometric series.
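Those steps are concrete enough to run. A short Python sketch of the partial-sum list, using exact fractions so there's no floating-point fuzz; the gap below 1 at step n is exactly 1/10^n:

```python
from fractions import Fraction

# Partial sums .9, .99, .999, ... computed exactly. The gap to 1
# shrinks as exactly 1/10**n, so no number smaller than 1 can sit
# above every entry of the list.
s = Fraction(0)
for n in range(1, 8):
    s += Fraction(9, 10**n)   # add the next term 9/10^n
    print(n, s, 1 - s)        # the last column is exactly 1/10**n
```

Since the gaps 1/10^n get below any positive number you name, 1 really is the smallest number above the whole list.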

Notice that I keep putting the word **definition** in bold face. (See, I did it again!) That's because it's a **definition**, which isn't really up for debate. It is the nature of a mathematical definition that once you accept it, you have to agree to its consequences. In other words, .99999... = 1 by the **definition** of the sum of a geometric series. It's also true if you use the popular formula

a/(1 - r) with a = 9/10 and r = 1/10, which gives (9/10)/(1 - 1/10) = (9/10)/(9/10) = 1.
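As a quick sanity check of that formula in exact arithmetic:

```python
from fractions import Fraction

# Geometric series closed form a/(1 - r) with a = 9/10, r = 1/10.
a, r = Fraction(9, 10), Fraction(1, 10)
print(a / (1 - r))  # → 1, since (9/10)/(9/10) reduces to exactly 1
```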

We're left with this: merely *saying* ".99999... doesn't equal 1" admits the fact that this number .99999... exists. And if it exists, it equals 1 by definition. The only way out for you now, if you still don't believe it, is to have a different working definition of the sum of an infinite series (go talk to some math professors, and see how far you get) or to deny the very existence of the number .9999.... I have seen a lot of people doubt that the number equals 1, but very few of them are willing to deny the very existence of that number. If you want to play "there's no such thing as infinitely long decimal representations," I'm afraid you won't get very far, because there's always the number pi to worry about, too, you know.

Okay, so there's my rant. .9 repeating equals one. No, I'm sorry, it does.

Brent,

Since my last reply to you is quite long, I understand that you need time to mull it over.

However, I would really like it if you would work with the real number set (as defined) rather than attaching your own made-up (and contradictory) definitions to it. I have tried telling you this before by using an example (which you found insulting, and whose very, very simple point you then completely ignored) and by giving you a link to the actual definition of the real number set.

But then I read this:

Brent wrote:

[But I should say that I am finding myself to be in the constructivism philosophy of math category.]

I take this to mean that you would rather construct your own number system than work with an established one. If this is your intention, then it is a waste of time to argue with you, since you would simply ignore everything I or anyone else says and just make up your own number world for the express purpose of saying that 0.999... != 1. Fine, go ahead. But please do not do it here, since it would be completely off-topic (your defining your own number system has nothing to do with the established real number set and its conclusions).

It's one thing to have misconceptions. It's another thing entirely when you admit that you are deliberately going to ignore established definitions and instead will make up your own for each of the math terms discussed.

Brent wrote:

[After reading the last few posts, I am going to have to agree that 0.999.. is not a number because it is a process.]

..."agree"? Which posts are you "agreeing" with? Do you have a new definition for the verb "to agree" to go along with your redefinition of mathematical terms?

Wait, I forgot, you're going by your very own personal definition of "number", in which fractions like 1/3, 2/7, 4/13, 5/14, 8/9, 4/34, etc. are all "non-numbers" when in decimal form. According to you, the ONLY things that can be "numbers" in decimal form are fractions whose denominator reduces to 1, a power of 2, a power of 5, or a product of powers of 2 and 5.

Go on, try to do some calculations on your calculator and see what you get for various fractions. Or, better yet, get a paper and pencil and do the long division until you either reach a termination point, or you realize that there's a pattern repeating.
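That pencil-and-paper exercise can be automated: track the remainders during the long division, and the first repeated remainder marks where the decimal starts repeating. A rough Python sketch (the function name `expand` is just illustrative):

```python
def expand(num: int, den: int) -> str:
    """Decimal expansion of num/den (0 < num < den), with the
    repeating block, if any, wrapped in parentheses."""
    digits, seen, r = [], {}, num
    while r != 0 and r not in seen:
        seen[r] = len(digits)       # where this remainder first occurred
        r *= 10
        digits.append(str(r // den))
        r %= den
    if r == 0:                      # the division terminated
        return "0." + "".join(digits)
    i = seen[r]                     # start of the repeating block
    return "0." + "".join(digits[:i]) + "(" + "".join(digits[i:]) + ")"

print(expand(1, 3))   # → 0.(3)
print(expand(1, 7))   # → 0.(142857)
print(expand(3, 8))   # → 0.375
```

Since there are only finitely many possible remainders (0 through den-1), the division must either terminate or repeat; there is no third option.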

I can go on to try to explain further, but that would involve using established definitions of the decimal system and number bases, which you are going to completely ignore and replace with your own versions, so I will stop here for now.

As for correcting your answers to questions 1 and 2, would you also take the time to explain how multiplication works for you? Is "2.4 x 3" the same as doing "2.4 + 2.4 + 2.4"? Is multiplying a number by 10 the same thing as adding up ten of that number?

If not, please state how your version of multiplication works when multiplying with non-10 numbers (since, according to you, multiplying by 10 shifts the decimal point and sticks on a zero at the right-hand "end" of a number).

If so, does it strictly follow that the decimal point is moved at some point during the addition?

Posted by: Monimonika | March 28, 2008 at 11:02 PM

I'm guessing none of those who disagree that .9999... (repeating) is equal to 1 have ever taken discrete math in college. Or if they have, they failed because they didn't pay attention. What arguing do you need? There's a formal proof in the initial post of this blog. Either find an error in one of the steps of the proof and show that it is indeed an error, or shut up.

Posted by: Cameron | March 29, 2008 at 08:28 PM

0.33333... is always a bit less than 1/3, so you cannot prove that 1/3 + 2/3 = 3/3 is the same as 0.33333... + 0.66666... = 0.99999....

Posted by: yumwind | April 01, 2008 at 10:19 PM

Hilarious!!!

I strongly advise not to state 1 = .999... at a party; you'll only get burnt by the mob.

Posted by: Anonymous | April 07, 2008 at 12:53 PM

Just wondering, is .9 repeating a real number? I thought all non-ending, non-repeating decimals were real, but that if they repeated they weren't real. But I haven't had math in years, so I'm probably wrong. If you could email me the response, that would be great; otherwise I may not get it.

Posted by: Justin | April 07, 2008 at 06:39 PM

No, they're both real. Rational numbers have repeating non-terminating decimal expansions, or else terminating ones. The more precise definition of a rational number, however, is that it's a fraction p/q where p and q are integers. And irrational numbers are the opposite: they're non-terminating, non-repeating decimals, and you can't write them as a fraction the way you can a rational number. So 2^(1/2), pi, e, etc. are good examples of them. Either way, irrational and rational numbers are real. Complex (or imaginary) numbers are not real.

Posted by: Angelus-Mortis | April 08, 2008 at 05:18 PM

And yumwind, 0.333... IS 1/3. If 0.333... eventually terminated, then it wouldn't be quite 1/3. The problem with the way you see it is that you can't write out all the 3's after the decimal place, because there are an infinite number of them.

Look over some of the proofs again. They explain quite clearly how these decimals are exactly the values you say they aren't.

Posted by: Angelus-Mortis | April 08, 2008 at 05:20 PM

This is true.

When I was 12 years old a Maths Teacher taught it by writing 0.9999999 on the board, and asked us to express it in a different way.

The way I chose was "1 - the reciprocal of a very large number" (i.e. 1 - 1/infinity). If there were such a way to express infinity, 1/infinity would be as near as possible to 0; if it wasn't, then add one to infinity and that would be nearer 0.

You don't have to put a definite value to something to know its effects. 0.9 rec. = 1.

There's a difference between us engineers and you scientists, we get the girl!!!!

Posted by: andysuth | April 10, 2008 at 10:52 AM

There's a difference between engineers and mathematicians, in that engineers would be content with a mere example, but mathematicians would not; they would only be content once they had a rigorous proof.

An example does not prove your point because it is still possible to find a counterexample.

Posted by: Angelus-Mortis | April 12, 2008 at 01:55 AM

So the implication here is that, if you have a number infinitesimally close to one, adding an infinitesimally small number to it won't create a sum equal to one?

Posted by: Andrew | May 13, 2008 at 08:40 AM

Andrew,

First off, understand that the whole 0.999... = 1 thing is based on the accepted definitions of decimal notation and the set of Real Numbers.

From what you wrote, it is obvious that you are already convinced that 1 > 0.999..., and that you think you can count real numbers in sequence just like you can with integers.

Integers:

... -2,-1,0,1,2,3, ...

Reals:

... 0, ?

(Wait, what comes after, or even before, 0?)

Changing the definition of what the decimal notation means (as in, switching out "0.999..." and replacing it with "0.999...9" or "0.999...0") invalidates your (and other deniers') entire argument.

It is understandable to have prior misconceptions. But it's considered dishonest if, after looking up the definition of what the reals are, you still insist on sticking to your misconceptions.

There is no such thing as an infinitesimally small number in the Reals, nor can it be represented with proper decimal notation. Look up what decimal notation is before you try answering back with a made-up number like "0.00...1".

Between any two real numbers that are not equivalent, there are infinitely many other real numbers. This is why there is no such thing as "a number just below/right next to 1" in the reals. The fact that you cannot find even a single other real number between 1 and 0.999... means that they are the same number.
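That denseness property is concrete for the rationals in particular: the average of any two distinct numbers lies strictly between them, so a genuine gap below 1 would have to contain a nameable number. A small Python sketch using exact fractions (the helper `between` and the sample values are mine, for illustration):

```python
from fractions import Fraction

def between(x: Fraction, y: Fraction) -> Fraction:
    """A number strictly between two distinct rationals: their average."""
    assert x != y
    return (x + y) / 2

# Any finite string of 9s leaves room below 1; here, six 9s.
lo, hi = Fraction(999_999, 1_000_000), Fraction(1)
mid = between(lo, hi)
print(lo < mid < hi)  # → True
```

Only for 0.999... itself does this fail, because there is no gap left in which an average could live.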

Posted by: Monimonika | May 13, 2008 at 12:40 PM

Though the thread is very old, I cannot resist commenting on it. It seems to me that, as some have said, the reluctance of people to accept the proposition .9-repeating = 1 is a matter of symbolic confusion.

However, I don't think the confusion is in the representation of the numbers involved. My candidate for the problem is "=".

If "=" is taken as "identical to", then when its rigorous meaning in math is conflated with the everyday meaning of "identical", confusion ensues. Worse, people take the non-rigorous definition and insist on special rigor because "numbers are precise".

If it is taken as "evaluates to", then it can be understood as a mathematical operation, and not a factual comment that the two things are identical in every particular. They are identical in mathematical terms because they evaluate to something that is interchangeable for every mathematical purpose.

On the other hand, if we take the everyday concept of "approaching indistinguishable", and add the arbitrary rigor people want to place on numerical representations, that is, that they must not differ at all (in this case the number of digits in the representations clearly makes them "different"!), we end up with a "self-evident" proof against equality.

This is a philosophical problem at many levels. Wittgenstein warned that just because a word can be *properly* applied to different things, that is not proof that those things share a fundamental essence. We humans tend to do this with abandon and confuse ourselves mightily.

I hope that was cogent enough to be worth reading. Thanks for a nice post.

Posted by: Yaakov | June 05, 2008 at 04:16 PM

Yaakov,

I disagree that the main problem is what nay-sayers think "=" means. Yes, there are definitely those who really do seem confused on the meaning of "=". However, from what I see, the majority of the naysayers insist either that there's a non-zero difference between 0.999... and 1 (namely, 0.000...1), or that 0.999... is not a real number that exists.

In neither instance is the meaning of "=" directly disputed (the latter comes close, but is more about notation than anything).

The naysayers who argue that 1 is an integer and 0.999... isn't, therefore they can't be equal, may be what you mean by those who do not understand what "=" means. If there are others, I am interested in having them pointed out.

Posted by: Monimonika | June 09, 2008 at 01:45 PM

Even though infinite .99999999 will never = 1. When, in reality, will that last infinite .1 of unknown measure, unit, etc. ever even matter? Besides, when will something ever be created that will behold the exact measure, unit, etc. of 1.000000 to the last billion or whatever decimal place? So really, in the real world in which all these numbers are put to work, the chance that you will ever have something be exactly the number you want, when taking account of those infinite decimal places, will never happen. So really 1:1 and .999999:.999999 will only ever exist in your head, so stop wasting your life.

Posted by: fjphello | June 14, 2008 at 09:51 PM

Here's another argument that certainly differs from all the others here. (And btw, please stop asking me to name a number between 0.99~ and 1, it gets annoying)

Okay, let's turn this whole thing upside down for a minute. Math is a cool tool, and it certainly helps us understand the world around us, but so far, the physical world would seem eternally divisible. If the physical world could somehow be divided into multi-dimensional pixels, those would obviously be further divisible. Keeping this in mind, it really makes no sense to speak of integers at all.

Integers are convenient for solving equations that tell us important things about this universe, but it is the integers that are disconnected from our world, where there is no such thing as absolute precision. Absolute precision is the convenient shortcut created by man to solve equations, NOT infinity, which is abundantly present in the real world.

People are saying that infinity is a strange concept we can never understand but just have to accept. If we can't describe the real world with logic, maybe logic itself is the true approximation?

Thus:

0.5 is really 0.5000...

1 - 0.5 is then 0.4999...

0.4999... = 0.5000..., but 1/2 is a flawed but convenient illusion.

Posted by: Kristian | June 15, 2008 at 05:17 PM

Most people here don't seem to realize that .1~ does not equal 1/9. Nor does .3~ equal 1/3. It's the most common way of representing it, but it is not exactly equal.

There may be no number between .9~ and 1, but that doesn't mean that they're the same, because .9~ isn't a number. It's a concept. Infinity isn't a number. It's a concept.

Posted by: MOOT | June 15, 2008 at 10:13 PM

MOOT, that is exactly my point, except the other way around. There is no such thing as absolute precision in the physical world. Sure, you can have 3 oranges, but there are no fundamental physical equations that use integers for parameters; indeed, quantum mechanics and superposition are an extreme example of this.

Infinity and eternal divisibility are very real in space and time; it is the integers that are only a human concept, where quantifying reasonably similar collections of particles and their infinitely complex correlations is a useful generalization.

Posted by: Kristian | June 16, 2008 at 02:09 PM

This question is not on the same topic but in the same category I believe (where our normal logic doesn't always hold true). Isn't there a way, using normal fraction rules, that 2 minus 2 does not equal 0 or something along those lines?

Posted by: David | June 25, 2008 at 03:12 PM