Why is impedance represented as a complex number rather than a vector?

Complex numbers are similar to vectors, but have some extra mathematical properties that make them useful. Most notably, using the complex exponential \$e^{j\omega t}\$ instead of sines and cosines makes differential equations much easier to deal with. That's how you get to complex impedance in the first place:

$$v(t) = A\mathrm e^{\mathrm{j} (\omega t + \theta)}$$ $$i(t) = B \mathrm e^{\mathrm j (\omega t + \phi)}$$ $$\frac {v(t)} {i(t)} = Z = \frac A B \mathrm e ^ {\mathrm j (\theta - \phi)}$$

Or, in phasor notation:

$$\hat V = A\angle \theta$$ $$\hat I = B\angle \phi$$ $$\frac {\hat V} {\hat I} = Z = \frac A B \angle (\theta - \phi)$$

You could use something like vector notation for the magnitude and phase, but vectors don't multiply and divide like complex numbers do, so it wouldn't improve anything.
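To make the phasor arithmetic concrete, here is a minimal Python sketch; the magnitudes and angles are made-up example values:

```python
import cmath
import math

# Hypothetical phasors: 10 V at 30 degrees, 2 A at -15 degrees
V = cmath.rect(10, math.radians(30))
I = cmath.rect(2, math.radians(-15))

# Complex division divides the magnitudes and subtracts the angles,
# exactly the phasor rule A/B at angle (theta - phi)
Z = V / I
print(abs(Z), math.degrees(cmath.phase(Z)))  # magnitude 10/2 = 5, angle 30 - (-15) = 45
```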

EDIT: Complex numbers were developed to solve certain algebra problems. If you want to know more about the history, check out the first chapter of Visual Complex Analysis by Tristan Needham. (You can read the preview on Amazon if you don't have a good library handy.)

The second chapter of the book can probably answer your question by itself, but I'll give it a shot too. Complex numbers are, in a sense, two-dimensional quantities, but what makes them useful here is that they also include the concept of rotation. Multiplication by \$\sqrt{-1}\$ is equivalent to a 90° rotation in a 2D plane:

$$\mathrm i ^ 0 = 1$$ $$\mathrm i ^ 1 = \mathrm i$$ $$\mathrm i ^ 2 = -1$$ $$\mathrm i ^ 3 = -\mathrm i$$ $$\mathrm i ^ 4 = 1$$
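Python's built-in `1j` reproduces this cycle directly; each multiplication by `1j` is a quarter turn:

```python
# Each multiplication by 1j (Python's imaginary unit) is a 90-degree rotation,
# so repeated multiplication cycles through 1, j, -1, -j and back to 1
z = 1 + 0j
for step in range(5):
    print(step, z)
    z *= 1j
```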

We can expand on this with complex exponentials, which let us represent a rotation by any amount:

$$\mathrm e^{\mathrm i\pi/4} \cdot\mathrm e^{\mathrm i\pi/4} = \mathrm e^{\mathrm i(\pi/4 + \pi/4)} = \mathrm e ^ {\mathrm i\pi/2} = \mathrm i$$ $$45^\circ + 45^\circ = 90^\circ$$

Notice that we get this by doing normal arithmetic -- multiplying real-valued exponentials works the same way.

Why does that matter? We can already represent rotations with sines and cosines, right? But that gets nasty in differential equations, mainly because sums of sines and cosines don't collapse into a single term the way products of exponentials do. On the other hand, the derivative of \$\mathrm e^x\$ is... itself. No trouble there!

So where does impedance come in? Well, think about the difference between DC and the sinusoidal steady state. At DC, node voltages are constant values with different magnitudes. At AC, node voltages are sinusoidal with the same frequency but different magnitudes and phase angles. The voltage/current relationships change too. With a resistor, voltage and current are in phase. In an inductor or a capacitor, there's a 90° phase difference between them.

So now the concept of rotation (phase "angle") has crept into our circuit analysis. We could stay in the time domain and do stuff like this:

$$v = L \frac {\mathrm d i} {\mathrm d t}$$ $$V\cos(\omega t) = L\,\frac{\mathrm d}{\mathrm d t}\big[I\cos(\omega t - 90^\circ)\big] = \omega L\cdot I\cos(\omega t)$$

Or we could use complex numbers, where a \$90^\circ\$ rotation just means multiplying by \$\mathrm i\$ (well, \$\mathrm j\$ in our case -- this is EE):

$$V\mathrm e^{\mathrm j \omega t} = \mathrm j\omega L \cdot I \mathrm e^{\mathrm j \omega t}$$

The key benefit here is that all of the \$\mathrm e^{\mathrm j \omega t}\$ terms cancel out of equations, so now our voltage/current relationship is just Ohm's Law with complex numbers:

$$\hat V = \mathrm j \omega L \hat I$$
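In code, the inductor's phasor relationship really is just one complex multiplication. A sketch with assumed component values (1 kHz source, 10 mH inductor, 0.5 A current):

```python
import cmath
import math

f = 1000.0                # assumed source frequency: 1 kHz
w = 2 * math.pi * f
L = 10e-3                 # assumed inductance: 10 mH
I = cmath.rect(0.5, 0.0)  # assumed current phasor: 0.5 A at 0 degrees

Z_L = 1j * w * L          # inductor impedance jwL
V = Z_L * I               # complex Ohm's law: V = jwL * I

# Magnitude is wL*I; phase is +90 degrees (voltage leads current)
print(abs(V), math.degrees(cmath.phase(V)))
```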

If I had to sum all this up in one sentence, I would say that complex numbers let you represent rotation by grouping the magnitude and phase together separate from the frequency, while sinusoids group the frequency and phase together.


Why are complex numbers used and not Vectors?

Simply because vector algebra defines no vector division, so you cannot use Ohm's law in its division form, which makes calculations more complicated. On the other hand, complex-number arithmetic has developed further over time than its vector counterpart, so you have many theorems at your disposal to simplify your expressions and carry out analysis easily. So even though you could work around the limits of vector algebra, it is easier to work with complex numbers.
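The division point is easy to see concretely: complex numbers form a field, so every nonzero impedance has a reciprocal, which is exactly what series/parallel combination and Ohm's law need. A small sketch with arbitrary example impedances:

```python
# Arbitrary example impedances (ohms)
Z1 = 3 + 4j
Z2 = 5 - 2j

# Series and parallel combinations work just like resistors, but complex;
# the parallel formula relies on division, which plain 2D vectors lack
Z_series = Z1 + Z2
Z_parallel = 1 / (1 / Z1 + 1 / Z2)

V = 10 + 0j
I = V / Z_series  # Ohm's law in division form
print(Z_series, Z_parallel, I)
```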

Read more: https://math.stackexchange.com/questions/246594/what-is-vector-division

Why is impedance represented as a complex number?

Consider a series RLC circuit driven by a source voltage \$V\$ (the original answer included a circuit diagram here).

If \$Q\$ is the charge on the capacitor and \$i\$ is the current, then applying KVL we will have

$$R\times i + \frac QC + L\times \frac{\mathrm di}{\mathrm dt} = V \dots(1)$$ $$ \implies \frac{\mathrm d^2i}{\mathrm dt^2} + \frac RL\times \frac{\mathrm di}{\mathrm dt} + \frac 1{LC}\times i = 0\dots (2)$$ (differentiating (1) with \$V\$ constant, using \$\mathrm dQ/\mathrm dt = i\$, and dividing by \$L\$) $$\implies i = A\mathrm e^{a_1t}+B\mathrm e^{a_2t}$$ where $$a_1, a_2 \in \mathbb C$$ since the roots of the characteristic equation of a second-order differential equation are in general complex.

Hence your \$i\$ is a complex expression, and substituting it into eq. (1) gives a \$V\$ that is also a complex expression. Dividing \$V\$ by \$i\$ yields another complex expression, which we call the impedance of this circuit. So you see, the reason an impedance is complex is the mathematics involved.
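In the sinusoidal steady state, the series RLC impedance this leads to is just the sum of the three element impedances. A sketch with assumed component values (100 Ω, 10 mH, 1 µF, evaluated at 1 kHz):

```python
import math

R, L, C = 100.0, 10e-3, 1e-6  # assumed example values
w = 2 * math.pi * 1000        # evaluate at an assumed 1 kHz

# Series RLC impedance: resistor + inductor + capacitor terms
Z = R + 1j * w * L + 1 / (1j * w * C)
print(Z, abs(Z))
```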

Now, if you want to get a "feel" for complex impedance, you should learn about phasors and draw an analogy with them.

Read More: https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-007-electromagnetic-energy-from-motors-to-lasers-spring-2011/lecture-notes/MIT6_007S11_lec19.pdf


Just to remark that you can also represent impedance as a matrix:

$$ R + \mathrm j X \leftrightarrow \begin{bmatrix} R & X \\ -X & R \end{bmatrix} $$

This is in fact the matrix representation of complex numbers. On the other hand you can represent sinusoidal signals (but not impedance) using vectors:
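You can verify numerically that this embedding preserves multiplication: multiplying two complex numbers and then embedding gives the same matrix as embedding each one and matrix-multiplying. A small pure-Python sketch with arbitrary values:

```python
def to_matrix(z):
    """Matrix representation of a complex number R + jX."""
    return [[z.real, z.imag],
            [-z.imag, z.real]]

def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

z1, z2 = 3 + 4j, 1 - 2j
lhs = to_matrix(z1 * z2)                    # multiply as complex numbers, then embed
rhs = matmul(to_matrix(z1), to_matrix(z2))  # embed, then matrix-multiply
print(lhs == rhs)                           # True: the embedding preserves multiplication
```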

$$ x_{\cos} + \mathrm j x_{\sin} \leftrightarrow \begin{bmatrix} x_{\cos} \\ x_{\sin} \end{bmatrix} $$

Addition/subtraction/scaling of impedances and sinusoids are obviously just the corresponding operations on matrices and vectors. Admittance is the matrix inverse of impedance:

$$ (R + \mathrm j X)^{-1} \leftrightarrow \begin{bmatrix} R & X \\ -X & R \end{bmatrix}^{-1} = \frac 1 {(R^2 + X^2)} \begin{bmatrix} R & -X \\ X & R \end{bmatrix} $$

You can matrix-multiply impedance with current, or admittance with voltage:

\begin{align} \begin{bmatrix} R & X \\ -X & R \end{bmatrix} \begin{bmatrix} i_{\cos} \\ i_{\sin} \end{bmatrix} &= \begin{bmatrix} R i_{\cos} + X i_{\sin} \\ R i_{\sin} - X i_{\cos} \end{bmatrix} \\ \begin{bmatrix} G & B \\ -B & G \end{bmatrix} \begin{bmatrix} u_{\cos} \\ u_{\sin} \end{bmatrix} &= \begin{bmatrix} G u_{\cos} + B u_{\sin} \\ G u_{\sin} - B u_{\cos} \end{bmatrix} \end{align}

Phase difference is also a matrix:

$$ {\mathrm e}^{\mathrm j \varphi} = \cos \varphi + \mathrm j \sin \varphi \leftrightarrow \begin{bmatrix} \cos \varphi & \sin \varphi \\ -\sin \varphi & \cos \varphi \end{bmatrix} $$

Derivative is simply \$ \omega \$ times a 90-degree phase lead:

$$ \mathrm j \omega \leftrightarrow \begin{bmatrix} 0 & \omega \\ -\omega & 0 \end{bmatrix} $$

With what we have so far, we can write differential equations as matrix equations:

\begin{align} U_0 \cos {\omega t} = u + R C \frac {\mathrm d u} {\mathrm d t} \leftrightarrow \begin{bmatrix} U_0 \\ 0 \end{bmatrix} = (\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + R C \begin{bmatrix} 0 & \omega \\ -\omega & 0 \end{bmatrix}) \mathbf u = \begin{bmatrix} 1 & R C \omega \\ -R C \omega & 1 \end{bmatrix} \mathbf u \end{align}

... and solve it by calculating the inverse matrix of \$ \begin{bmatrix} 1 & R C \omega \\ -R C \omega & 1 \end{bmatrix} \$ and then multiply it onto the \$ U_0 \$ vector.
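A quick sketch of that solve in code, with assumed component values, comparing the matrix route against ordinary complex division (note the sign convention: with \$u(t) = u_{\cos}\cos\omega t + u_{\sin}\sin\omega t\$ and \$u(t) = \operatorname{Re}(\hat u\, \mathrm e^{\mathrm j\omega t})\$, we have \$u_{\cos} = \operatorname{Re}\hat u\$ and \$u_{\sin} = -\operatorname{Im}\hat u\$):

```python
import math

R, C, U0 = 1000.0, 1e-6, 5.0  # assumed: 1 kOhm, 1 uF, 5 V amplitude
w = 2 * math.pi * 159.0       # assumed frequency, near 1/(2*pi*R*C)

a = R * C * w                 # the off-diagonal entry RCw

# Inverse of [[1, a], [-a, 1]] is (1/(1 + a^2)) * [[1, -a], [a, 1]];
# applied to the vector [U0, 0] it gives [U0/det, a*U0/det]
det = 1 + a * a
u_cos = U0 / det
u_sin = a * U0 / det

# Same answer by complex division: u_hat = U0 / (1 + jwRC),
# with u_cos = Re(u_hat) and u_sin = -Im(u_hat) under this convention
u_hat = U0 / (1 + 1j * w * R * C)
print(u_cos, u_sin, u_hat)
```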


As you can see though, this system of notation is quite verbose, and doesn't provide an intuitive representation of phase and amplitude (everything is in Cartesian coordinates essentially).

BTW, power has a neat representation as the vector dot product:

$$ \frac 1 2 (u_{\cos} i_{\cos} + u_{\sin} i_{\sin}) = \frac 1 2 {\mathbf i}^{\mathrm T} \mathbf u = \frac 1 2 \begin{bmatrix} i_{\cos} & i_{\sin} \end{bmatrix} \begin{bmatrix} u_{\cos} \\ u_{\sin} \end{bmatrix} $$
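Numerically, this dot-product form agrees with the familiar phasor power formula \$\frac 1 2 \operatorname{Re}(\hat U \hat I^*)\$; a sketch with arbitrary example amplitudes:

```python
# Arbitrary example cosine/sine amplitudes for voltage and current
u_cos, u_sin = 3.0, 4.0
i_cos, i_sin = 1.0, 2.0

# Average power as the vector dot product
P_dot = 0.5 * (u_cos * i_cos + u_sin * i_sin)

# Same quantity via complex phasors: P = (1/2) Re(U * conj(I))
U = u_cos + 1j * u_sin
I = i_cos + 1j * i_sin
P_phasor = 0.5 * (U * I.conjugate()).real

print(P_dot, P_phasor)  # both 5.5
```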

Tags:

Math

Impedance