IT APPEARS to most of us that computers can do tons of math calculations of mind-boggling complexity, far more than any human can even comprehend.
NOT SO. In actuality, when it computes, a computer can do one and only one thing: ADD. But it does this really, really fast. So fast, in fact, that you can’t see that when it looks like it is subtracting, dividing and multiplying, it’s still only adding.
Huh? At its lowest level, any electronic digital computing machine, whether a computer or even a calculator, can only perform math calculations at the bit level (more about this below), in a binary environment (we’ll get to this shortly as well). But, still, all it does is ADD!
How does this work? Well, let’s take an example. Suppose you want to add 3 plus 4. That’s easy: the computer simply adds “3 + 4” and returns the answer: “7”.
But, then, how does a computer subtract? In that case, a computer still adds, but it adds a negative number. For example, let’s say you want to subtract 3 from 6. Here’s how a binary computer does this: it adds “6 + (-3)” and returns the answer: 3. [How does it get the -3? A computer has the ability to use its electronic circuitry to rapidly convert a positive number to its negative counterpart through a process known as the “two’s complement”: flip every bit of the number, then add 1. The result is the negative of the original number, and it is done at the “bit level” (again, more about this below).]
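To make this concrete, here is a minimal Python sketch of subtraction by two’s-complement addition. The 8-bit width is an illustrative choice (real machines typically use 32 or 64 bits), and the function name is ours:

```python
def twos_complement(n, bits=8):
    """Negate n the way the hardware does: flip every bit, then add 1,
    keeping only the fixed number of bits."""
    return (~n + 1) & (2**bits - 1)

# Subtraction as addition: 6 - 3 becomes 6 + (-3).
neg_3 = twos_complement(3)      # 0b11111101, the 8-bit pattern for -3
result = (6 + neg_3) & 0xFF     # keep only 8 bits; the carry out is discarded
print(result)                   # 3
```

Note that the adding circuitry never changes; only the bit pattern fed into it does.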
Using this process, computers can also “simulate” multiplication and division as follows:
To multiply 6 times 4, the computer rapidly adds 6 to a running total four times to come up with the answer of “24” (i.e. 6 + 6 + 6 + 6).
Similarly, 24 divided by 6 is calculated by repeatedly adding the negative of 6 until the total reaches 0, then counting how many additions it took. Like this: 24 + (-6) + (-6) + (-6) + (-6) reaches 0 after four additions of (-6), so the answer is 4.
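Both simulations above can be sketched in a few lines of Python. This is a simplified illustration for non-negative whole numbers that divide evenly (real processors use faster bit-level shortcuts, but the principle of building on addition is the same):

```python
def multiply(a, b):
    """Multiply by repeated addition: add a into a running total, b times."""
    total = 0
    for _ in range(b):
        total += a          # the only arithmetic operation used is addition
    return total

def divide(a, b):
    """Divide by repeatedly adding the negative of b, counting the steps
    it takes to reach 0."""
    count = 0
    while a > 0:
        a += -b             # adding the negative, just like subtraction above
        count += 1
    return count

print(multiply(6, 4))       # 24
print(divide(24, 6))        # 4
```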
During this discussion, we have said that these numerical calculations are done at the “bit level,” which is what accounts for the rapidity of the process that “simulates” more than just addition. What do we mean by this?
First of all, consider that in binary systems like computer systems, there are only two choices for every digit: “0” and “1”. That is why computer code is always shown as strings of 0s and 1s. For more about this, see the discussion, Tying It All Together, A Primer On Electricity, Magnetism and Binary Computers.
So a computer can manipulate numbers and letters, as directed by programs, so long as the numbers and letters are in a digital binary format it can recognize. Each character is expressed in binary. For example, the binary equivalent of the number 0 is “00000000,” 1 is “00000001,” 2 is “00000010,” 3 is “00000011,” 4 is “00000100” and so forth.
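You can print these 8-bit patterns yourself with a short Python loop (the `'08b'` format code means “binary, padded to 8 digits”):

```python
# Print the 8-bit binary pattern for the first few whole numbers.
for n in range(5):
    print(n, format(n, '08b'))
```

Running it prints 0 as 00000000, 1 as 00000001, 2 as 00000010, and so on, matching the table above.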
The most common and the lowest common denominator for this binary digital format is the ASCII format. The American Standard Code for Information Interchange was created in 1963 by what later became known as the "American National Standards Institute" or "ANSI" (see Associations). It was based on the set of symbols and characters already used in telegraphy at that time by the Bell company. It originally defined 128 characters (numbered 0 through 127) and has since expanded to the “extended” table containing 256 characters (0 through 255), virtually any character necessary to write in the English language. Click HERE for the entire table. There is some overlap in the code (e.g. the same bit pattern, 01000001, can stand for either the letter A or the number 65), but the computer can distinguish between the two through the program being used to process it.
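A quick Python demonstration of that overlap, using the built-in `chr` and `ord` functions to move between a character and its ASCII code:

```python
# The same 8-bit pattern can be read as a number or as a character.
pattern = 0b01000001
print(pattern)                   # 65 (read as a number)
print(chr(pattern))              # A  (read as an ASCII character)
print(format(ord('A'), '08b'))   # 01000001
```

The bits themselves never say which interpretation is intended; the program decides.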
So, let’s look at a bit-level calculation, as the computer sees it. Let’s keep it simple: add 5 + 6. The computer would see it this way:

      00000101   (5)
    + 00000110   (6)
    ----------
      00001011   (11)
But wait a minute. In the example above, the third column from the right adds two “1”s, but it can’t create a “2” because this isn’t allowed in binary code (just 0s and 1s, remember?). What does it do, then? Just like addition in our standard base-ten system, it puts down a 0 and carries the extra “1” to the adjacent column on its left. (If a carry is generated out of the leftmost of the 8 bits, it is simply discarded.) Click HERE for an explanation of different “base” systems, like base-10 and hexadecimal. And HERE to compare what source code, object code and digital machine language actually look like as the programming process continues.
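The carry-the-1 procedure just described can be sketched directly in Python, working column by column on 8-bit binary strings (the function name and string representation are our illustrative choices; real hardware does this with adder circuits):

```python
def bit_level_add(a_bits, b_bits):
    """Add two 8-bit binary strings column by column, rightmost first,
    carrying the extra 1 to the left. A carry out of the leftmost
    column is discarded."""
    carry = 0
    result = []
    for i in range(7, -1, -1):             # start at the rightmost column
        total = int(a_bits[i]) + int(b_bits[i]) + carry
        result.append(str(total % 2))      # the bit that stays in this column
        carry = total // 2                 # the 1 carried to the next column
    return ''.join(reversed(result))

print(bit_level_add('00000101', '00000110'))   # 00001011, i.e. 5 + 6 = 11
```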
So NOW you know how a computer actually “thinks”!