How can I multiply and divide using only bit shifting and adding?

The answer by Andrew Toulouse can be extended to division.

Division by integer constants is considered in detail in the book "Hacker's Delight" by Henry S. Warren (ISBN 9780201914658).

The first idea for implementing division is to write the reciprocal of the denominator in base two.

E.g., 1/3 = (base-2) 0.0101 0101 0101 0101 0101 0101 0101 0101 .....

So, a/3 = (a >> 2) + (a >> 4) + (a >> 6) + ... + (a >> 30) for 32-bit arithmetic.

By combining the terms in an obvious manner we can reduce the number of operations:

    b = (a >> 2) + (a >> 4)   /* b ~ a * 0.0101 (base 2) */
    b += (b >> 4)             /* b ~ a * 0.01010101 */
    b += (b >> 8)             /* b ~ a * 0.0101010101010101 */
    b += (b >> 16)            /* b ~ a * 0.01010101010101010101010101010101 */
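
To make this concrete, here is a minimal C sketch of that idea (my own illustration, assuming 32-bit unsigned arithmetic; the name div3 is just for the example). The truncated series slightly underestimates the true quotient, so a small fix-up on the remainder is added at the end:

    #include <stdint.h>
    #include <stdio.h>

    /* Approximate a/3 with shifts and adds, then correct the small
     * underestimate left by the truncated series. */
    static uint32_t div3(uint32_t a)
    {
        uint32_t q = (a >> 2) + (a >> 4);   /* q ~ a * 0.0101 (base 2) */
        q += q >> 4;                        /* q ~ a * 0.01010101 */
        q += q >> 8;
        q += q >> 16;

        /* q <= a/3, so the remainder r = a - 3*q is non-negative and small;
         * 3*q is computed with a shift and an add. */
        uint32_t r = a - ((q << 1) + q);
        while (r >= 3) {
            r -= 3;
            q += 1;
        }
        return q;
    }

    int main(void)
    {
        printf("%u\n", div3(100));   /* prints 33 */
        return 0;
    }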

There are more exciting ways to calculate division and remainders.

EDIT1:

If the OP means multiplication and division of arbitrary numbers, not the division by a constant number, then this thread might be of use: https://stackoverflow.com/a/12699549/1182653

EDIT2:

One of the fastest ways to divide by integer constants is to exploit modular arithmetic and Montgomery reduction: What's the fastest way to divide an integer by 3?
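
As a small aside, a hedged C illustration of the modular-arithmetic part of that trick (not the Montgomery-reduction code from the linked answer): 0xAAAAAAAB is the multiplicative inverse of 3 modulo 2^32, so when the dividend is already an exact multiple of 3, a single multiply recovers the quotient.

    #include <stdint.h>
    #include <stdio.h>

    /* 3 * 0xAAAAAAAB == 1 (mod 2^32), so for inputs that are exact
     * multiples of 3 one multiplication yields the quotient.
     * Note: this uses a hardware multiply, unlike the shift-and-add
     * method above, and is only valid when a % 3 == 0. */
    static uint32_t div3_exact(uint32_t a)
    {
        return a * 0xAAAAAAABu;   /* valid only when a % 3 == 0 */
    }

    int main(void)
    {
        printf("%u\n", div3_exact(99u));   /* prints 33 */
        return 0;
    }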


To multiply in terms of adding and shifting you want to decompose one of the numbers into its powers of two, like so:

21 * 5 = 10101_2 * 101_2             (Initial step)
       = 10101_2 * (1 * 2^2  +  0 * 2^1  +  1 * 2^0)
       = 10101_2 * 2^2 + 10101_2 * 2^0 
       = 10101_2 << 2 + 10101_2 << 0 (Decomposed)
       = 10101_2 * 4 + 10101_2 * 1
       = 10101_2 * 5
       = 21 * 5                      (Same as initial expression)

(_2 means base 2)
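
Here is a rough C sketch of exactly that decomposition (assuming 32-bit unsigned operands; the name mul_shift_add is just illustrative): the multiplier is scanned bit by bit, and a shifted copy of the multiplicand is added in for every set bit.

    #include <stdint.h>
    #include <stdio.h>

    /* Multiply using only shifts, adds, and bit tests: for each set
     * bit i of b, add (a << i) into the result. */
    static uint32_t mul_shift_add(uint32_t a, uint32_t b)
    {
        uint32_t result = 0;
        while (b != 0) {
            if (b & 1)           /* current multiplier bit is set */
                result += a;     /* add the shifted multiplicand */
            a <<= 1;             /* move to the next bit position */
            b >>= 1;             /* examine the next multiplier bit */
        }
        return result;
    }

    int main(void)
    {
        printf("%u\n", mul_shift_add(21u, 5u));   /* prints 105, i.e. 21 * 5 */
        return 0;
    }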

As you can see, multiplication can be decomposed into adding and shifting and back again. This is also why multiplication takes longer than bit shifts or adding - it's O(n^2) rather than O(n) in the number of bits. Real computer systems (as opposed to theoretical ones) have a finite number of bits, so multiplication takes a constant multiple of the time of addition and shifting. If I recall correctly, modern processors, if pipelined properly, can do multiplication just about as fast as addition, by messing with the utilization of the ALUs (arithmetic logic units) in the processor.