Correct way of solving the equation for simple harmonic motion

I think you're worrying too much. This is the correct approach (I'm going to be slightly flippant, so don't take this first paragraph too seriously on a first reading :) ):

  1. Step 1: Understand the meaning of the Picard-Lindelöf Theorem;
  2. Step 2: Understand that, by assigning state variables to all but the highest-order derivative, you can rework $\ddot x +\omega^2\,x=0$ into a vector version of the standard form $\dot{\mathbf{u}} = f(\mathbf{u})$ addressed by the PL theorem, and that, in this case, $f(\mathbf{u})$ fulfills the conditions of the PL theorem (it is Lipschitz continuous);
  3. Step 3: Choose your favorite method for finding a solution to the DE and boundary conditions - tricks you learn in differential equations 101, trial and error stuffing guesses in and seeing what happens ..... anything! .... and then GO FOR IT! (A numerical sketch of Steps 2 and 3 follows this list.)
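
Here is that sketch (the values of $\omega$, $x(0)$ and $\dot x(0)$, and the use of `scipy`, are illustrative choices of mine, not anything from the question): rework the second-order equation into the first-order system $\dot x = v$, $\dot v = -\omega^2 x$, hand it to any solver you like, and check that whatever comes out agrees with the familiar closed form, as the PL theorem guarantees it must.

```python
# Sketch only: solve the reworked first-order system u' = f(u), u = (x, v),
# with an off-the-shelf integrator, then compare with the closed-form solution.
import numpy as np
from scipy.integrate import solve_ivp

omega, x0, v0 = 2.0, 1.0, 0.5                 # illustrative values

def f(t, u):
    x, v = u
    return [v, -omega**2 * x]                 # x' = v,  v' = -omega^2 x

t = np.linspace(0.0, 10.0, 500)
sol = solve_ivp(f, (0.0, 10.0), [x0, v0], t_eval=t, rtol=1e-10, atol=1e-12)

# However the solution was obtained, it must coincide with the standard one:
x_closed = x0 * np.cos(omega * t) + (v0 / omega) * np.sin(omega * t)
print(np.max(np.abs(sol.y[0] - x_closed)))    # tiny (solver tolerance)
```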

Okay, that's a bit flippant, but the point is that you know from basic theoretical considerations there must be a solution and, however you solve the equation, if you can find a solution that fits the equation and boundary conditions, you simply must have the correct and only solution no matter how you deduce it.

In particular, the above theoretical considerations hold whether the variables are real or complex, so if you find a solution using complex variables and it fits the real boundary conditions, then it must be the same as the solution you would find by sticking with real-variable notation. Indeed, one can define the notions of $\sin$ and $\cos$ through the solutions of $\ddot x +\omega^2\,x=0$, and by the PL-theorem considerations above they have to be equivalent to the complex exponential solutions. You can then think of this enforced equivalence as the reason behind your own beautifully worded insight:

"So using sin/cos and even is essentially equivalent so long as you allow for complex constants to provide a conversion factor between the two."

Drop the word "essentially" and you've got it all sorted!

Actually, let's go back to Step 2 in my "tongue in cheek" (but altogether theoretically sound) answer, as it shows us how to unite all of these approaches and bring in the physics nicely. Break the equation up into a coupled pair of first order equations by writing:

$$\dot{x} = \omega\,v;\, \dot{v} = -\omega\,x$$

and now we can write things succinctly as a matrix equation:

$$\dot{X} = -i\,\omega \, X;\quad i\stackrel{def}{=}\left(\begin{array}{cc}0&-1\\1&0\end{array}\right)\text{ and } X = \left(\begin{array}{c}x\\v\end{array}\right)\tag{1}$$

whose unique solution is $X(t) = \exp(-i\,\omega\,t)\,X(0)$, where $\exp$ is the matrix exponential. Note also that, even though $i$ here is a real-coefficient matrix, it satisfies $i^2=-\mathrm{id}$. Now, you may know that one perfectly good way to represent complex numbers is the following: the field $(\mathbb{C},\,+,\,\ast)$ is isomorphic to the commutative field of matrices of the form:

$$\left(\begin{array}{cc}x&-y\\y&x\end{array}\right);\quad x,\,y\in\mathbb{R}\tag{2}$$

together with matrix multiplication and addition. For matrices of this special form, matrix multiplication is commutative (although of course it is not generally so) and the isomorphism is exhibited by the bijection

$$z\in\mathbb{C}\;\leftrightarrow\,\left(\begin{array}{cc}\mathrm{Re}(z)&-\mathrm{Im}(z)\\\mathrm{Im}(z)&\mathrm{Re}(z)\end{array}\right)\tag{3}$$

So if we now let $Z$ be a $2\times2$ matrix of this form, then we can solve (1) by mapping the state vector $X = \left(\begin{array}{c}x\\v\end{array}\right)$ bijectively to the $2\times 2$ matrix $Z = \left(\begin{array}{cc}x&-v\\v&x\end{array}\right)$, solving the equation $\dot{Z} = -i\,\omega\,Z$, i.e. $Z(t) = \exp(-i\,\omega\,t)\,Z(0)$, where $Z(0)$ is the $2\times 2$ matrix of the form (2) built from the values of $x(0)$ and $v(0)$ that fulfill the boundary conditions, and then taking the first column of the resulting $2\times 2$ matrix solution $Z(t)$ to get $X(t)$.
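
Here is a small sketch of that recipe in code (the numbers and the use of `scipy.linalg.expm` are my own illustrative choices); it also checks, via the isomorphism (3), that ordinary complex arithmetic gives the same answer.

```python
# Sketch: solve X' = -i w X by mapping X to the matrix Z of form (2),
# propagating Z(t) = exp(-i w t) Z(0), and reading off the first column.
import numpy as np
from scipy.linalg import expm

omega = 2.0
i_mat = np.array([[0.0, -1.0],
                  [1.0,  0.0]])            # the matrix "i" of (1); i_mat @ i_mat = -id

x0, v0 = 1.0, 0.5                          # boundary values x(0), v(0)
Z0 = np.array([[x0, -v0],
               [v0,  x0]])                 # Z(0), of the special form (2)

t = 1.3
Zt = expm(-i_mat * omega * t) @ Z0         # Z(t) = exp(-i w t) Z(0)
x_t, v_t = Zt[:, 0]                        # the first column is X(t)

# The same answer via ordinary complex numbers, using the isomorphism (3):
z_t = np.exp(-1j * omega * t) * (x0 + 1j * v0)
print(x_t - z_t.real, v_t - z_t.imag)      # both effectively zero
```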

This is precisely equivalent to the complex notation method you have been using, as I hope you will see if you explore the above a little. The phase angles are encoded by the phase of the $2\times2$ matrix $Z$, thought of as a complex number by the isomorphism described above.

Moreover, there is some lovely physics here. Consider half the squared norm of the state vector $X$: it is $E = \frac{1}{2}\,\langle X,\,X\rangle = \frac{1}{2}(x^2 + v^2)$, and you can immediately deduce from (1) that

$$\dot{E} = \langle X,\,\dot{X}\rangle = X^T\,\dot{X} = -\omega\,X^T \,i\, X = 0\tag{4}$$

This has two interpretations. Firstly, $E$ is the total energy of the system, partitioned into potential energy $\frac{1}{2}\,x^2$ and kinetic energy $\frac{1}{2}\,v^2$. Secondly, (4) shows that the state vector, written in Cartesian components, follows the circle $x^2+v^2=2\,E$, and indeed this motion is uniform circular motion at $\omega$ radians per unit time. So simple harmonic motion is the motion of either Cartesian component of uniform circular motion.
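
A quick numerical check of (4) and of the circular-motion picture, with illustrative values of my own choosing:

```python
# Sketch: verify that E = (x^2 + v^2)/2 is constant along the solution of (1),
# i.e. that the state (x, v) stays on the circle x^2 + v^2 = 2E.
import numpy as np

omega, x0, v0 = 2.0, 1.0, 0.5
t = np.linspace(0.0, 10.0, 1000)

# Solution of (1): a uniform rotation of the initial state vector at rate omega
x = x0 * np.cos(omega * t) + v0 * np.sin(omega * t)
v = v0 * np.cos(omega * t) - x0 * np.sin(omega * t)

E = 0.5 * (x**2 + v**2)
print(E.max() - E.min())                        # ~1e-16: E is conserved
print(np.allclose(x**2 + v**2, x0**2 + v0**2))  # True: motion stays on the circle
```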

You could also solve the problem by beginning with (1), deducing (4) and then making the substitution

$$x=\sqrt{2\,E}\,\cos(\theta(t));\quad\, v=\sqrt{2\,E}\,\sin(\theta(t))\tag{5}$$

which is validated by the conservation law $x^2+v^2=2\,E$ with $\dot{E}=0$. Then substitute $x$ back into the original SHM equation to deduce that

$$\theta(t) = \pm\omega\,t+\theta(0)\tag{6}$$
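
Spelling out that last step (my own filling-in of the algebra): substituting (5) into $\ddot x +\omega^2\,x=0$ gives

$$0=\ddot{x}+\omega^2\,x=\sqrt{2\,E}\,\left[\left(\omega^2-\dot{\theta}^2\right)\cos(\theta)-\ddot{\theta}\,\sin(\theta)\right]$$

which is satisfied for all $t$ by a constant $\dot{\theta}=\pm\omega$, whence (6).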


The simplest insight into a differential equation like $$ \ddot{x}+ a\dot{x}=- b x $$ is to note that the left hand side tells you that a combination of derivatives of the function $x(t)$ must be a multiple of that function, since the right hand side is proportional to $x(t)$.

What then are the (simple) functions which give back a multiple of themselves under differentiation? There is only $Ae^{\lambda t}$ (with $\lambda$ and $A\ne 0$ constants) or the trivial function $x=$ constant. No other function returns a multiple of itself under differentiation.

It follows that $x$ must be of the form $x=Ae^{\lambda t}$. Inserting this into the differential equation transforms the problem from finding the function to finding the constants $\lambda$ and $A$.

Indeed we get the auxiliary equation $\lambda^2+a\lambda=-b$, with the factor $A$ cancelling out. $\lambda$ can then be found by solving the quadratic. As there are (usually) two roots $\lambda_\pm$, the general solution will be $$ x(t)=A_+e^{\lambda_+ t}+A_-e^{\lambda_-t}\, .$$ Because the position $x(t)$ is a real number, it must be that the constants $A_\pm$, which are determined by the initial conditions of the problem, combine to give a real function; if the roots $\lambda_\pm$ have imaginary parts it must be that, upon using Euler's formula, $x(t)$ takes the form $$ x(t)=e^{-\gamma t}\left(B_+\cos(\omega t)+B_-\sin(\omega t)\right) $$ Here $-\gamma$ is the real part and $\omega$ is the imaginary part of $\lambda_\pm=-\gamma\pm i\omega$.
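
If you want to see this recipe mechanized, here is a small `sympy` sketch (my own; the damped-oscillator values $a=2$, $b=5$ are just an example): insert the trial form, read off the auxiliary equation and its roots, and let `dsolve` confirm the real form of the general solution.

```python
# Sketch: derive the auxiliary equation for x'' + a x' + b x = 0 from the
# trial solution x = A exp(lam t), then confirm the general solution on an example.
import sympy as sp

t, a, b, lam, A = sp.symbols('t a b lam A')
x = A * sp.exp(lam * t)

aux = sp.simplify((sp.diff(x, t, 2) + a * sp.diff(x, t) + b * x) / x)
print(aux)                  # the auxiliary equation lam**2 + a*lam + b (A has cancelled)
print(sp.solve(aux, lam))   # the two roots lambda_+/-

# Damped-oscillator example: a = 2, b = 5, so lam = -1 +/- 2i
xf = sp.Function('x')
ode = sp.Eq(xf(t).diff(t, 2) + 2 * xf(t).diff(t) + 5 * xf(t), 0)
print(sp.dsolve(ode, xf(t)))  # x(t) = (C1*sin(2*t) + C2*cos(2*t))*exp(-t)
```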

The same argument applies when $a=0$: what functions are such that their second derivatives are multiples of themselves? Again $e^{\pm\lambda t}$, but now also $\sin(\omega t)$ and $\cos(\omega t)$. The former transforms into a positive multiple of itself, and the latter two into negative multiples of themselves. Thus, you can specialize the form of the solution from the sign of $b$ in the differential equation. For pure harmonic motion, $b=\omega^2>0$.

Thus, as you guessed, there is redundancy in the methods. A bit of insight into how functions behave under differentiation, combined with some physical requirement, is enough to rapidly isolate the form of the solution, with unknown coefficients to be matched to your specific problem.


Since all three are more or less equivalent, use whichever you see fit; however, some are more useful than others depending on the situation at hand.

Using complex exponentials and then taking the real part at the end is useful when you are solving more complicated problems, for example forced simple harmonic oscillation with damping:

Forced oscillator

$$\ddot x +\gamma \dot x+\omega_0^2x=\frac{F_0}{m}\cos(\omega t)$$

We seek a steady state solution. In the complex plane, the equation of motion is

$$\ddot{\widetilde{x}} +\gamma \dot{\widetilde{x}} +\omega_0^2{\widetilde{x}}=\frac{F_0}{m}e^{i\omega t}$$

We solve by substituting in the trial solution $${\widetilde{x}}=Ae^{i(\omega t + \phi)}$$

which is a complex solution, and we would wait right until the end (after all the differentiation and substitutions are made) to take the real part to get the physical solution.
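
For concreteness, here is a sketch of that workflow with illustrative parameter values of my own: the trial solution turns the ODE into an algebraic equation for the complex amplitude $Ae^{i\phi}$, and the physical displacement is recovered by taking the real part only at the very end.

```python
# Sketch: steady-state response of x'' + gamma x' + w0^2 x = (F0/m) cos(w t)
# via the complex trial solution x~ = A e^{i(w t + phi)}.
import numpy as np

gamma, omega0, omega, F0_over_m = 0.3, 1.5, 1.2, 2.0   # illustrative values

# Substituting the trial solution: (-w^2 + i gamma w + w0^2) A e^{i phi} = F0/m
A_complex = F0_over_m / (omega0**2 - omega**2 + 1j * gamma * omega)
A, phi = abs(A_complex), np.angle(A_complex)            # steady-state amplitude and phase

t = np.linspace(0.0, 50.0, 5000)
x = np.real(A_complex * np.exp(1j * omega * t))         # take the real part at the end

# Check it satisfies the real equation of motion with the cosine drive:
dt = t[1] - t[0]
xdot = np.gradient(x, dt)
xddot = np.gradient(xdot, dt)
residual = xddot + gamma * xdot + omega0**2 * x - F0_over_m * np.cos(omega * t)
print(np.max(np.abs(residual[2:-2])))                   # small (finite-difference error only)
```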


But for simpler situations like $$\ddot x +\omega ^2x=0$$

It is absolutely fine to use method 1: $$x=A\cos(\omega t)+B\sin(\omega t)$$
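
And, tying back to the first answer's point about equivalence, a last sketch (illustrative numbers of mine) checking that method 1 and the "complex exponential, real part at the end" route give the same $x(t)$ once the constants are matched to the same initial conditions:

```python
# Sketch: A cos(w t) + B sin(w t) fitted to x(0) = x0, x'(0) = v0 agrees with
# Re[C e^{i w t}] for C = x0 - i v0/w.
import numpy as np

omega, x0, v0 = 2.0, 1.0, 0.5
t = np.linspace(0.0, 10.0, 500)

# Method 1: real sinusoids with A = x0, B = v0/omega
x_real_form = x0 * np.cos(omega * t) + (v0 / omega) * np.sin(omega * t)

# Complex exponential, taking the real part at the end
C = x0 - 1j * v0 / omega
x_complex_form = np.real(C * np.exp(1j * omega * t))

print(np.allclose(x_real_form, x_complex_form))   # True
```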