Now, I mentioned that binary was important, and the reason for this is that it's the base with the fewest digits, making it the simplest way to represent a number. What's more, because binary only requires two digits, we can associate each digit with one of two states, say true and false, or high voltage and low voltage. This is one of the reasons why binary is particularly good for computers; a computer can represent and manipulate numbers just by switching between a low voltage and a higher one.

Another reason why binary is so great is that it simplifies many mathematical operations. Addition by hand works the same way in any base, so it isn't improved by much (though a circuit performing base 2 addition is far simpler than one in base 10), but there are algorithms for multiplication and division that are more efficient on a binary representation, as well as for many other more advanced operations. How, for example, would we compute 0b1010 + 0b1010?

1 1
 1010 +
 1010
-----
10100

I'm assuming you're familiar with the usual column layout I've used above (the carries are written along the top), but note that here, because we can only use 0 and 1, the sum 1 + 1 is 0 carry 1, instead of 2 with no carry as in decimal. Also note that the result is exactly the same as the operands, but with all the digits shifted one place to the left. With a little bit of thought, we can see why this is the case. Consider, in decimal, the equation 5 * 10 = 50. Multiplying by 10 shifts the operand (5) one digit to the left and appends a 0, and multiplying by 10 is the same as summing ten copies of the operand. Analogously, adding a binary number to itself is equivalent to multiplying it by 2, which similarly shifts it one digit to the left. Multiplying by 2 is a much more useful operation than multiplying by 10, so much so that the act of shifting a number a certain number of places to the left (or right) is its own operation, called bit shifting. C++ supports this using the << and >> operators respectively. (These are not the stream operators; rather, the stream operators are overloads of the bit shift operators for stream types.)

For example, say we wish to find 104 * 256. Certainly we can simply calculate this using the normal method of multiplication, but we can also do this

104 << 8

256 = 2^8, and so multiplying by 256 is the same as multiplying by 2 eight times. Of course, converting to binary, shifting the number, then converting it back to decimal is unlikely to be faster by hand, but since a computer already stores numbers in binary, this operation can be significantly faster. This is more of a theoretical point than a practical one, however: in most cases the speedup isn't significant compared to the rest of the program, and any modern compiler will recognise that 104 * 256 = 104 << 8 and make the optimisation for you automatically. This doesn't mean bit shifting isn't useful though; it can be very handy when dealing with non-text data, especially images, where bit shifting and some other operators we're about to meet allow you to extract the red, green, and blue components of a colour.

As I mentioned earlier, we can associate each digit of a binary number with either true (1) or false (0). This allows us to define analogues of the logical and, or, and not operators for numbers. These operators are defined bitwise, that is, they apply to each pair of corresponding bits in turn. This'll make more sense when we look at examples.

Consider the usual logical and, the && operator. The expression a && b evaluates to true if both a and b are true, and evaluates to false otherwise. Similarly, given two bits, the bitwise and operator & returns 1 if both bits are 1 and returns 0 otherwise. The & operator performs this on each pair of corresponding bits in sequence, so for example the expression 0b10101 & 0b10110 evaluates to 0b10100.

We then have the bitwise or and bitwise not operators, denoted | and ~ respectively, which act much as you would expect (one caveat: in C++, ~ flips every bit of the integer's full width, so the result below shows only the lowest five bits):

0b10101 | 0b10110 = 0b10111
~0b10101 = 0b01010

There's one more bitwise operation that's commonly used, the so-called exclusive or or xor operator, denoted ^. Whereas the standard (inclusive) or returns true if at least one of the bits is true, the ^ operator only returns true if exactly one of the bits is. For example, 0b10101 ^ 0b10110 = 0b00011. Note however that if we xor this result with one of the original operands, 0b10101 ^ 0b00011 = 0b10110, we get back the other operand! This property makes the xor operator very useful for encryption: we can take a message, xor it with a secret key, and transmit the encrypted message, and the other person only needs to xor it with the same key to get back the original input. If the key is sufficiently long, this becomes very hard to crack despite its simplicity!

If you've been running these examples in C++ whilst you've been reading this then I apologise for all the compiler errors you're probably having; sadly, whilst C++ has long had native support for octal literals (numbers written in octal) and hexadecimal literals, as of the C++11 standard it does not support binary literals. The upcoming C++14 standard does, however, so if you're lucky and your compiler has already implemented this part of the new C++ specification then all this code will run fine, as the 0b prefix is part of the new standard (and is used in several other languages too, such as Ruby and Java).