There’s a question I like to ask random people: where is the flaw in the argument that -1 = 1 because -1 × 0 = 1 × 0? I very rarely get a satisfactory response. Usually the answer is that “you’re not allowed to multiply both sides by zero.” But we can come up with a slightly subtler argument: -1 = 1 because (-1)^2 = 1^2. Some just don’t answer, others will insist that it’s not allowed… To me it suggests something is deeply wrong with how most people understand algebra.

They don’t know mathematics, they know voodoo-mathematics, a series of mysterious steps that result in their test being returned with a checkmark beside the question.

Now it may seem that I’m being a pedant. After all, they know it isn’t true; what does it matter if they can’t tell me why? But even if we set aside the fact that it simply feels wrong to not understand why the math works, it has practical implications because there are cases where the mistake won’t be as overt as above. And then these people won’t see the mistake.

So I’d like to use this essay to go over grade school algebra from a different perspective.

The first thing we need to do is introduce the idea of implication. A statement P is said to imply a statement Q if P being true means that Q must be true. We denote “P implies Q” as P ⇒ Q or Q ⇐ P. If P implies Q and Q implies P, we say that P is true if and only if (shortened to iff) Q is true. We denote this P ⇔ Q.
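Implication can also be spelled out mechanically. As a quick sketch of my own (not part of the essay), “P implies Q” is the truth function that is false only when P is true and Q is false, which in Python is just `(not p) or q`:

```python
def implies(p, q):
    """Material implication: false only when p is true and q is false."""
    return (not p) or q

def iff(p, q):
    """p if and only if q: implication in both directions."""
    return implies(p, q) and implies(q, p)

assert implies(True, True)
assert not implies(True, False)
assert implies(False, True)   # a false premise implies anything
assert iff(True, True) and iff(False, False)
assert not iff(True, False)
```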

Now consider an equation. It is an assertion and is either true or false. For example, 1 + 1 = 2 is true but 1 + 1 = 3 is false. On the other hand, as things get more complicated it may not be immediately obvious whether an equation is true or false, and once we start adding variables, the equation being true or false is contingent on unknowns. So we become interested in the web of logical interconnections that exist between equations. This is the domain of algebra.

For example, it may so happen that accepting that a specific statement is true implies that others are. Or we may be able to demonstrate that a statement that is not immediately clear to be true can be reduced to something that is.

We need one more thing before we can begin doing something interesting, the idea of a function. A function is a map between two sets A and B (write: f: A → B) that maps every element of the first set to a specific element of the second set. The most common way to describe a function is as f(x) = &lt;expression&gt;; for example, f(x) = 2x is a function that doubles the input.

Now, because a certain input always maps to the same output, we get our first lemma: a = b ⇒ f(a) = f(b). For example, x - 1 = 3 implies that x = 4, since we can apply f(x) = x + 1 to both sides.
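The lemma is easy to check mechanically. Here is a small Python sketch of my own (not from the essay) that applies f(x) = x + 1 to both sides of x - 1 = 3 and confirms equality is preserved:

```python
# First lemma: a = b implies f(a) = f(b).
# Illustrated with the equation x - 1 = 3, whose solution is x = 4.

def f(t):
    """The function we apply to both sides: f(t) = t + 1."""
    return t + 1

x = 4                    # the value that makes x - 1 = 3 true
lhs, rhs = x - 1, 3      # the two sides of the equation
assert lhs == rhs        # the original equation holds

# Applying the same function to equal values gives equal values:
assert f(lhs) == f(rhs)  # (x - 1) + 1 == 3 + 1, i.e. x == 4
```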

Now we can reconsider the original question asked. What is wrong with the argument that -1 = 1 because 0 = 0? Well, -1 = 1 would certainly imply that 0 = 0, since we could apply f(x) = 0 × x to both sides. But the implication does not go the other direction. There is a very big difference between P ⇒ Q and Q ⇒ P; not recognizing this difference is sadly common.

Now, one might ask if there is a way that we can invert the implications. Clearly the implication doesn’t reverse for all functions, as f(x) = 0 × x demonstrates, but on the other hand it is fairly easy to see that it does for f(x) = x + 1. The type of function for which the reverse implication holds is called an injective function. Injective functions never map two different values to the same value, so we get our second lemma: for an injective function f, a = b ⇔ f(a) = f(b).
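One way to see the difference is to test injectivity over a sample of inputs. This Python sketch (my addition, not the author’s) compares f(x) = 0 × x with f(x) = x + 1:

```python
def is_injective_on(func, domain):
    """Check that func never maps two different inputs in `domain`
    to the same output (injectivity restricted to this sample)."""
    seen = {}
    for x in domain:
        y = func(x)
        if y in seen and seen[y] != x:
            return False
        seen[y] = x
    return True

sample = range(-10, 11)
assert not is_injective_on(lambda x: 0 * x, sample)  # everything maps to 0
assert is_injective_on(lambda x: x + 1, sample)      # distinct inputs, distinct outputs
```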

OK, let’s consider another fallacious argument: since (-2)^2 = 2^2, sqrt((-2)^2) = sqrt(2^2) and thus -2 = 2. This might seem to follow from our first lemma: we begin with two things that are equal and apply a function, right? Wrong, actually. The mistake is that square root is not a function. It’s what we call a multifunction: every value maps to multiple values. In the case of square root, sqrt(4) can be 2 or -2, sometimes written ±2. These two variations of the answer are called branches.
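The two branches are easy to make concrete. This Python snippet (an illustration of mine, not from the post) treats the real square root as a set of values, and shows why collapsing the set to one value loses information:

```python
import math

def sqrt_branches(x):
    """Square root as a multifunction: the set of real numbers
    whose square is x (for x >= 0)."""
    r = math.sqrt(x)  # the positive 'main' branch
    return {r, -r}

# (-2)^2 == 2^2, and the *sets* of square roots agree...
assert sqrt_branches((-2) ** 2) == sqrt_branches(2 ** 2) == {2.0, -2.0}
# ...but that only tells us -2 = 2 *or* -2 = -2, not which one holds.
```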

A more complicated example of people getting tripped up by multifunctions comes from manipulating expressions (which we will talk about at greater length shortly). You see, people who know the rules that log(e^x) = x and log(1) = 0, and find out that e^(2πi) = 1, will sometimes notice that 2πi = log(e^(2πi)) = log(1) = 0 and be confused. What is going on? log is a multifunction! In fact, log has an infinite number of branches!
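Python’s `cmath.log` returns only the principal branch, which makes the branch structure easy to poke at. A sketch of mine (not the author’s):

```python
import cmath

# The principal branch of complex log has imaginary part in (-pi, pi].
# Every other branch differs from it by a multiple of 2*pi*i.
z = cmath.exp(2j * cmath.pi)   # e^(2*pi*i), numerically ~1
assert cmath.isclose(z, 1)

principal = cmath.log(1)       # the branch Python picks: 0
assert principal == 0

# But 2*pi*i is an equally valid logarithm of 1:
other_branch = principal + 2j * cmath.pi
assert cmath.isclose(cmath.exp(other_branch), 1)

# So "log(1)" as a multifunction is {2*pi*i*k : k an integer},
# and naively cancelling log(e^w) = w silently picks one branch.
```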

So what rules can we come up with for dealing with multifunctions? First of all, a multifunction can still be injective, in which case our second lemma still holds. The second thing that can happen is that we can cut a particular branch of the multifunction and use that. Finally, we can use it as a multifunction and consider all cases, in which case implication is preserved.

Let’s consider the difference between these with the example of a^2 = b^2. First of all, since sqrt is injective, sqrt(a^2) = sqrt(b^2) would imply that a^2 = b^2 (ie. a^2 = b^2 ⇔ sqrt(a^2) = sqrt(b^2)). Secondly, if we restrict sqrt to the positive `main’ branch, we get |a| = |b| (absolute value because even if a is negative, the main branch of sqrt applied to a^2 must be positive). Finally, we can consider it as a multifunction, in which case we get ±a = ±b (ie. a = b or a = -b; there were four cases, but ultimately it only mattered whether the signs were the same or different).
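The “consider all cases” option can be done by brute force. A small Python sketch (my illustration, not part of the essay) enumerates the four sign cases of ±a = ±b and confirms they collapse to two conclusions:

```python
from itertools import product

# Treat sqrt as a multifunction on both sides of a^2 = b^2:
# each side contributes a sign, giving four cases s*a = t*b.
a, b = 3, -3  # a pair with a^2 == b^2
assert a ** 2 == b ** 2

conclusions = set()
for s, t in product([1, -1], repeat=2):
    # The case s*a = t*b simplifies to a = b when the signs match,
    # and to a = -b when they differ.
    conclusions.add("a = b" if s == t else "a = -b")

# Four cases, but only two distinct conclusions:
assert conclusions == {"a = b", "a = -b"}
```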

And things become more complicated as we add more variables and equations. And they become yet more complicated when you start to look at relations other than equality; for example what preserves the greater than inequality? (Answer: Strictly increasing functions.) And notice that the objects of our equation don’t have to be numbers; they could even be sets or functions… The network of implications between equations has yet more complicated rules in these cases.
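For the inequality case, a quick numerical check (again my addition) of why strictly increasing functions preserve “greater than” while non-monotonic ones need not:

```python
# Strictly increasing functions preserve a > b; non-monotonic ones may not.
a, b = 2, -3
assert a > b

def cube(x):
    return x ** 3    # strictly increasing on the reals

def square(x):
    return x ** 2    # not monotonic on the reals

assert cube(a) > cube(b)            # 8 > -27: order preserved
assert not (square(a) > square(b))  # 4 > 9 is false: order broken
```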

These are very interesting things to consider, but let us restrict ourselves to the consideration of real numbers under equality. The lemmas we developed over the course of this essay are sufficient to approach a wide variety of problems. These are simple rules that one can learn in a brief sitting.

Just like one can easily learn the rules of chess and yet that will not even begin to make them a good chess player, one can easily learn the rules of algebra and yet that is very different from being good at it. So how do you get good? You play, and you look at good games. You learn techniques that can be used in common situations. And that’s how you become good at algebra.

Many of the most important techniques to learn have to deal with manipulating expressions, turning them from one form into an equivalent one. Identities like exponent laws, and techniques like factoring and expanding. And you may think that this means that every time you come to a new scenario, you will need to learn a new technique. Not so! Patterns recur. For example, a + b = b + a parallels ab = ba. These recurring patterns are the motivation behind abstract algebra where, instead of studying specifics like addition and multiplication, we study algebraic structures which are formalizations of these recurring rules, like groups, rings and fields.

In conclusion, the heart of grade school level algebra is the logical interconnections between formulae. And yet, the only thing that is ever taught seems to be techniques for manipulating expressions which, while an important part of algebraic skill, are almost useless without an understanding of the logical structure between the formulae themselves. Besides which, teaching algebra along the lines I’ve outlined would teach logic which I believe students should take away from their mathematical education, if nothing else. Sadly, this too is neglected in favor of whatever trivia it is decided should be taught in math class.

April 19, 2011 at 04:44 |

I am unsure of a few things I hope you’d be able to clear up. First of all, here’s an excerpt from your paper: “For example, since we can apply f(x) = x +1 to both sides…” You’re referring to your statement, “x-1 = 3 implies that x = 4.” How can you apply f(x) = x +1 to both sides? Why x + 1? I’m confused.

Wonderful paper, brings some delicious insight into our blindingly corrupted education system.

April 21, 2011 at 16:30 |

>… Your refering to your statement, “x-1 = 3 implies that x = 4.” How can you apply f(x) = x +1 to both sides? Why x + 1? I’m confused.

Like so:

x-1=3

f(x-1)=f(3)

(x-1)+1 = 3+1

x=4

>Wonderful paper, brings some delicious insight into our blindingly corrupted education system.

Thank you.

July 31, 2012 at 22:47 |

Chris, I think that this bit of the blog needs editing to make this point crystal clear. “both sides” could refer to the implication, rather than each equation. This is what confused Mariokart (it is remarkable that he was able to pinpoint his point of confusion).

Suggestion 1: separate the equations in question from the expository text.

Suggestion 2: clarify “both sides” to “both sides of any equation”. And describe in English that this generates a new equation that is implied by the first (I know that is what your lemma means, but some readers don’t).

Suggestion 3: make it clear that a lot of this is just rewriting, and show the steps.

September 10, 2011 at 09:33 |

great paper!

September 27, 2011 at 21:50 |

Hi Chris,

This paper was really lucid! Interestingly for me, its very lucidity exposed the outlines of my own deficits in both numeracy and abstract cognition: I had to translate a lot of the numbers into objects – specifically apples – and then think about why negative two apples was not the same as two apples, and why translating numbers into apples breaks down pretty quickly, and what the difference is between things and numbers…

I’m only sort of kidding. That’s really about where I am, mathematically speaking.

But the experience of thinking through your paper raised some interesting questions for me about the gap between concrete and abstract thinking that has to be crossed in order to do math *at all*. Do you remember crossing that gap at some point? Or do you experience a capacity for abstraction as sort of an inborn orientation? Now I’m really curious…

September 28, 2011 at 16:37 |

I don’t think there’s one particular point in mathematics where you go from concrete thinking to abstract thinking. It’s more of a journey where you keep building and learning abstractions as you go along.

Imagine seeing nature for the first time, with no prior knowledge. At first you see these flat green things around you, but you quickly realize that they are all a similar type of object, which we usually call leaves. And you notice that they clump together on other objects, trees. And you can look at how trees interact and reason about that, and come up with rules about how some trees prevent other types from growing near them, while others clump together. And eventually you start to think about groups of trees of the same kind, and larger groups containing many kinds, and the animals that interact with them, and you start to see the forest, and then the ecosystem within it, and then the biome…

The point of that rambling is that there isn’t a single “I see abstractly” realization, but rather a sequence. Once you see the forest in the trees you can still look at it more abstractly, and more abstractly yet. And that doesn’t obsolete the earlier abstractions; sometimes you need to look at a particular thing more closely.

So I don’t remember the moment I first understood early abstractions (and I’ll bet you don’t either — when did you first get the idea of one?) but I do remember the first time I understood many others. And there’s still lots I don’t understand, don’t ask me about cohomologies or Galois groups, for instance. Furthermore, there’s this really strange common thing where one understands an abstraction or at least thinks they do, but is suddenly exposed to an idea that makes things click and they see it in a new deeper light — I have had that happen with the idea of a derivative so many times at this point that I’ve lost count.

Does that answer your question?

December 12, 2011 at 23:58 |

Very interesting essay, thanks for making it available.

You may enjoy the book “Negative Math” by Al Martinez.

http://press.princeton.edu/titles/8026.html

December 22, 2011 at 18:52 |

Thanks for the suggestion