How do computers calculate sin values?

Typically, high-resolution sin(x) functions are implemented with a CORDIC (COordinate Rotation DIgital Computer) algorithm, which needs only a small number of iterations using shifts, add/subtract operations, and a small lookup table. The original paper, The CORDIC Trigonometric Computing Technique by Jack Volder, is from 1959. It also works nicely when implemented in hardware in an FPGA (and a similar algorithm would be implemented in a hardware FPU for those micros that have an FPU).
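
To make the idea concrete, here is a minimal fixed-point CORDIC sketch in C (rotation mode). The Q1.30 angle format, the 16-iteration count, and the precomputed gain constant are illustrative choices for the example, not a reference implementation:

```c
#include <stdint.h>
#include <stdio.h>
#include <math.h>

#define CORDIC_ITERS 16
#define Q30(x) ((int32_t)((x) * (1 << 30)))   /* Q1.30 fixed point */

static int32_t atan_table[CORDIC_ITERS];      /* atan(2^-i) in Q1.30 radians */

static void cordic_init(void)
{
    for (int i = 0; i < CORDIC_ITERS; i++)
        atan_table[i] = Q30(atan(pow(2.0, -i)));
}

/* Returns sin(theta) in Q1.30; theta in Q1.30 radians, |theta| <= pi/2. */
static int32_t cordic_sin(int32_t theta)
{
    /* Start with the CORDIC gain K = 1/prod(sqrt(1 + 2^-2i)) ~ 0.607253,
     * so the final vector lands on the unit circle without a later scale. */
    int32_t x = Q30(0.6072529350088813);
    int32_t y = 0;
    int32_t z = theta;

    for (int i = 0; i < CORDIC_ITERS; i++) {
        int32_t dx = x >> i;                  /* only shifts and add/subtract */
        int32_t dy = y >> i;
        if (z >= 0) {                         /* rotate toward the target angle */
            x -= dy;  y += dx;  z -= atan_table[i];
        } else {
            x += dy;  y -= dx;  z += atan_table[i];
        }
    }
    return y;                                 /* x now holds cos(theta) in Q1.30 */
}

int main(void)
{
    cordic_init();
    double theta = 0.5;
    printf("cordic sin(0.5) = %f, libm = %f\n",
           cordic_sin(Q30(theta)) / (double)(1 << 30), sin(theta));
    return 0;
}
```

Each extra iteration adds roughly one more bit of accuracy, so the iteration count is simply chosen to match the resolution you need.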

For lower resolution, for example to create a synthesized sine wave for an inverter or motor VFD (Variable Frequency Drive), a lookup table (LUT) with or without interpolation works well. It's only necessary to store the values for one quadrant of the sine wave because of symmetry.
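
As an illustration of the quarter-wave idea, here is a sketch in C using a 256-entry table, a 16-bit phase accumulator and 6-bit linear interpolation; all of those sizes are arbitrary choices for the example:

```c
#include <stdint.h>
#include <stdio.h>
#include <math.h>

#define QUARTER_POINTS 256                       /* table entries covering 0 .. pi/2 */

static int16_t quarter_lut[QUARTER_POINTS + 1];  /* +1 so interpolation can read the pi/2 endpoint */

static void lut_init(void)
{
    for (int i = 0; i <= QUARTER_POINTS; i++)
        quarter_lut[i] = (int16_t)lround(32767.0 * sin(1.5707963267948966 * i / QUARTER_POINTS));
}

/* phase: 16 bits, where 0..65535 maps onto 0..2*pi.
 * Returns sin(phase) scaled to +/-32767, using one stored quadrant plus symmetry. */
static int16_t lut_sin(uint16_t phase)
{
    uint16_t quadrant = phase >> 14;             /* top 2 bits: which quadrant */
    uint16_t offset   = phase & 0x3FFF;          /* position within the quadrant */

    if (quadrant & 1)                            /* quadrants 1 and 3 run backwards */
        offset = 0x4000 - offset;

    uint16_t index = offset >> 6;                /* 0..256 into the table */
    uint16_t frac  = offset & 0x3F;              /* 6-bit interpolation fraction */

    int32_t a = quarter_lut[index];
    int32_t b = quarter_lut[index + (frac ? 1 : 0)];
    int32_t v = a + (((b - a) * (int32_t)frac) >> 6);   /* linear interpolation */

    return (int16_t)((quadrant & 2) ? -v : v);   /* quadrants 2 and 3 are the negative half */
}

int main(void)
{
    lut_init();
    for (int k = 0; k < 8; k++) {
        uint16_t phase = (uint16_t)(k * 8192);   /* steps of pi/4 around the circle */
        printf("phase %5u: lut %6d  ideal %6.0f\n", phase, lut_sin(phase),
               32767.0 * sin(2.0 * 3.14159265358979 * phase / 65536.0));
    }
    return 0;
}
```

Dropping the interpolation (just returning quarter_lut[index]) trades accuracy for a slightly cheaper lookup, which is often fine for PWM reference generation.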

As @Temlib points out, the algorithms used inside modern FPUs use range reduction followed by evaluation of a polynomial whose coefficients are chosen with something like the Remez algorithm to limit the maximum absolute error. More can be found in this Intel paper Formal verification of floating point trigonometric functions.
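
As a sketch of the range-reduction step (not of any particular FPU's internals), here is a simple Cody-Waite style reduction in C that folds the argument into roughly [-pi/4, pi/4] and remembers which quadrant it came from. The two split constants are a commonly used split of pi/2; a real library adds more terms, or switches to Payne-Hanek reduction, for very large arguments:

```c
#include <math.h>
#include <stdio.h>

/* pi/2 split into a high part with trailing zero bits (so n*PIO2_HI is exact
 * for modest n) plus a low correction term; their sum is pi/2 to extra precision. */
static const double PIO2_HI     = 1.57079632673412561417e+00;
static const double PIO2_LO     = 6.07710050650619224932e-11;
static const double TWO_OVER_PI = 0.63661977236758134308;

/* Reduce x to r in roughly [-pi/4, pi/4]; the return value n mod 4 says whether
 * sin(x) equals sin(r), cos(r), -sin(r) or -cos(r). */
static int reduce_pio2(double x, double *r)
{
    int n = (int)lround(x * TWO_OVER_PI);
    *r = (x - n * PIO2_HI) - n * PIO2_LO;   /* two-step subtraction keeps the low bits */
    return n & 3;
}

int main(void)
{
    static const double sign[4] = { 1.0, 1.0, -1.0, -1.0 };
    double x = 100.0, r;
    int n = reduce_pio2(x, &r);
    double core = (n & 1) ? cos(r) : sin(r);    /* stand-in for the fitted polynomial */
    printf("sin(%g): libm %.17g, via reduction %.17g (quadrant %d, r = %.17g)\n",
           x, sin(x), sign[n] * core, n, r);
    return 0;
}
```

After this step the polynomial only ever has to be accurate over a small, fixed interval, which is what makes the Remez fit so effective.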


Most computer trig libraries are based on polynomial approximations, which give the best balance between speed and accuracy. For example, a dozen or so multiply and add/subtract operations are enough to give full single-precision accuracy for sine and cosine.
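
For a feel of what that evaluation looks like, here is a sketch in C of an odd polynomial for sin over roughly [-pi/4, pi/4], evaluated in Horner form. The coefficients are plain Taylor terms for readability; a real library would use Remez-fitted (minimax) coefficients, and possibly one more term, to hold full single-precision accuracy across the whole interval:

```c
#include <stdio.h>
#include <math.h>

/* sin(x) ~ x + c3*x^3 + c5*x^5 + c7*x^7 on |x| <= pi/4 (after range reduction). */
static float sin_poly(float x)
{
    const float c3 = -1.6666667e-01f;   /* -1/3!  (Taylor, illustrative, not minimax) */
    const float c5 =  8.3333333e-03f;   /*  1/5!  */
    const float c7 = -1.9841270e-04f;   /* -1/7!  */

    float x2 = x * x;
    /* Horner form: 5 multiplies and 3 adds in total. */
    return x + x * x2 * (c3 + x2 * (c5 + x2 * c7));
}

int main(void)
{
    for (int k = -2; k <= 2; k++) {
        float x = 0.35f * (float)k;
        printf("x = % .2f  poly = % .9f  libm = % .9f\n", x, sin_poly(x), sinf(x));
    }
    return 0;
}
```

Counting the range reduction and sign/quadrant handling on top of the 8 arithmetic operations above is how you end up at the "dozen or so" operations mentioned.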