Why is dipole the simplest source in electrodynamics?

The smallest radiating unit is an accelerating dipole moment. That can of course be produced by a single accelerated charge, which is equivalent to an oscillating dipole: $$ \ddot{p} = q\ddot{r},$$ where $r$ is the displacement of the charge about some fiducial point.

You don't get a radiation field unless the charged particle is accelerating, and because of this the radiation "source" has to have a finite size. For a sinusoidal oscillation of acceleration amplitude $a_0$, where $\ddot{r}= a_0\sin \omega t$, that size is $a_0/\omega^2$.
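As a quick numerical sanity check (with purely illustrative values of $a_0$ and $\omega$, not taken from any physical system), integrating $\ddot{r} = a_0\sin\omega t$ twice shows that the displacement amplitude, i.e. the size of the source, is indeed $a_0/\omega^2$:

```python
import numpy as np

# Illustrative values (not from the text): acceleration amplitude and frequency.
a0, w = 2.0, 5.0

t = np.linspace(0.0, 2 * np.pi / w, 100_001)   # one oscillation period
dt = t[1] - t[0]
acc = a0 * np.sin(w * t)                       # \ddot{r} = a0 sin(wt)

def cumtrapz(y, dt):
    """Cumulative trapezoidal integral, starting at zero."""
    return np.concatenate(([0.0], np.cumsum((y[:-1] + y[1:]) / 2) * dt))

vel = cumtrapz(acc, dt)
vel -= vel.mean()           # choose the integration constant giving a pure oscillation
disp = cumtrapz(vel, dt)
disp -= disp.mean()

# The displacement amplitude matches a0 / omega^2.
print(disp.max(), a0 / w**2)
```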


Simple reason: an oscillating monopole field in a region isolated from currents would violate charge conservation. Note a monopole field is not the same as an oscillating monopole charge, which, as Rob Jeffries's answer discusses, actually produces a dipolar field.

Let $(r,\,\theta,\phi)$ be the standard spherical co-ordinates, with corresponding orthonormal basis $(\boldsymbol{\hat{r}},\boldsymbol{\hat{\theta}},\boldsymbol{\hat{\phi}})$, each pointing along the direction of increasing respective co-ordinate.

Then a monopolar electric field would have the functional form:

$$\mathbf{E} = f(r,\,t)\,\boldsymbol{\hat{n}}(r,\,t)$$

where the magnitude $f$ and direction $\boldsymbol{\hat{n}}$ depend only on $r$ and time $t$.

Firstly, the Hairy Ball Theorem; see e.g.:

Tyler Jarvis and James Tanton, "The Hairy Ball Theorem via Sperner's Lemma", Amer. Math. Monthly, 111, #7, pp. 599-603, 2004

forbids any $\theta$- and $\phi$-independent vector field $\boldsymbol{\hat{n}}$ with $\boldsymbol{\hat{\theta}},\,\boldsymbol{\hat{\phi}}$ components: such a field would be a nowhere-vanishing continuous tangent vector field on the sphere, which the theorem rules out. So we know that our monopole field must be of the form:

$$\mathbf{E} = f(r,\,t)\,\boldsymbol{\hat{r}}$$

But now calculate the flux of $\mathbf{E}$ through the sphere $r=r_0$. The answer, together with the enclosed charge it implies via Gauss's law, is:

$$\Phi_E = 4\,\pi\,r_0^2\, f(r_0,\,t) = \frac{Q}{\epsilon_0}$$

which violates charge conservation unless the charge does not vary with time, or unless there are radially directed currents at all values of $r$ (which is not what we mean when we talk about a radiating monopole). So the only possible monopolar field in a dielectric medium is an electrostatic one - that of a single, isolated charge, or of a spherically symmetric central distribution of charge.
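The flux result is easy to verify numerically (the radial profile $f$ below is an arbitrary illustrative choice): for any field $\mathbf{E} = f(r,t)\,\boldsymbol{\hat{r}}$, the surface integral over the sphere $r=r_0$ reduces to $4\pi r_0^2 f(r_0,t)$.

```python
import numpy as np

# Arbitrary illustrative radial profile f(r); the check works for any choice.
def f(r):
    return 1.0 / r**1.5

r0 = 2.0

# Flux of E = f(r) r_hat through the sphere r = r0:
# E is parallel to dA, so  Phi = ∫∫ f(r0) r0^2 sin(theta) dtheta dphi.
theta = np.linspace(0.0, np.pi, 4001)
dtheta = theta[1] - theta[0]
integrand = f(r0) * r0**2 * np.sin(theta)
# trapezoidal rule in theta; the phi integral just contributes 2*pi
flux = 2 * np.pi * np.sum((integrand[:-1] + integrand[1:]) / 2) * dtheta

print(flux, 4 * np.pi * r0**2 * f(r0))   # the two agree: Phi = 4*pi*r0^2*f(r0)
```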

If you want to talk about the far field alone, then the Hairy Ball Theorem alone rules out tangential monopolar electric and magnetic fields that are locally like plane waves. There has to be some $\theta$ and $\phi$ dependence.

This answer generalizes Rob Jeffries's answer, which begins by considering the simplest motions of an isolated charge; charge conservation is thus fulfilled in his answer by construction.


You are correct that in electrodynamics the only real sources of radiation are non-uniformly moving charges. However, when you solve for the potentials, you get some intricate expressions, the so-called Liénard-Wiechert potentials, and the fields calculated from them become very complicated. Moreover, decomposing an arbitrary system with given charge and current densities into various moving point charges is even more complicated. The difficulty that arises is that in order to calculate the vector potential at point $\mathbf r$ and time $t$, you have to integrate over the retarded positions of the sources at all past times. So it is not enough to know, e.g., the positions and velocities of the charges at the current time (as it is in classical mechanics). In essence this procedure takes into account all sources that lie on the (past half of the) light cone.

Nevertheless, one could start with them. We will follow here essentially the discussion of J. D. Jackson, Classical Electrodynamics [Chapter 9 in the 3rd edition].

The retarded vector potential reads (leaving away almost all constants) $$ \mathbf A(\mathbf r,t) = \int \mathrm d^3r' \int \mathrm dt' \frac{\mathbf j(\mathbf r',t')}{|\mathbf{r-r'}|} \delta(t'-t+|\mathbf{r-r'}|/c), $$ where the $\delta$-function takes care of the mentioned integration along the light cone, and the choice of signs in its argument ensures the causality of the solution.

For given current and charge distributions, one can then in principle calculate the fields. Now, one assumes that the sources have a certain time dependence (e.g. $\mathrm e^{-\mathrm i \omega t}$) and that they are restricted to a small region in space. Small means here that one can associate a wavelength $\lambda = 2 \pi c /\omega$ with the time dependence, and that the source dimension $d$ is much smaller than this wavelength, $d \ll \lambda$.

This leads to three different spatial regions:

  1. near-field zone: $d \ll r \ll \lambda$,
  2. intermediate zone: $d \ll \lambda \approx r$,
  3. far-field zone: $d \ll \lambda \ll r$.
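A tiny helper makes the three regimes concrete; the numbers here are hypothetical (a millimetre-sized source oscillating at 1 GHz), and "$\ll$" is read crudely as a factor of ten:

```python
import math

# Hypothetical numbers: a millimetre-sized source oscillating at 1 GHz,
# plus a crude factor-of-ten reading of "<<".
c = 3.0e8                  # speed of light in m/s
w = 2 * math.pi * 1.0e9    # angular frequency, 1 GHz
d = 1.0e-3                 # source size, 1 mm

lam = 2 * math.pi * c / w  # wavelength associated with the time dependence (0.3 m)
assert d < lam / 10        # the small-source assumption d << lambda holds

def zone(r, margin=10.0):
    """Classify a distance r using a factor-of-`margin` reading of '<<'."""
    if r < lam / margin:
        return "near-field"
    if r > margin * lam:
        return "far-field"
    return "intermediate"

print(lam, zone(0.01), zone(0.3), zone(10.0))
```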

It turns out that the fields have different properties in the three regions. Of interest to the question here is only the last regime, where the dimensions of the source can be neglected (assuming one does not care about a precision that would distinguish the field at a point $\mathbf r$ from that at a point less than a distance $d$ away; that is what the assumption $d \ll \lambda$ is for).

We can then approximate an oscillating moving point charge by a current density located at a single point $\mathbf r_0$ with harmonic time dependence, of the form $$ \mathbf j (\mathbf r,t) = \mathbf j (\mathbf r) \mathrm e^{-\mathrm i \omega t} = \mathbf j_0 \delta(\mathbf{r-r_0}) \mathrm e^{-\mathrm i \omega t} . $$ The fact that the current density of a point charge can be written as a $\delta$-function is not important; we could also have some extended current $\mathbf j(\mathbf r)$, as long as it is sufficiently small according to the above considerations. Upon evaluating the $t'$ integration with the $\delta$-function, the vector potential becomes $$ \mathbf A(\mathbf r,t) = \int \mathrm d^3r' \frac{\mathbf j(\mathbf r')}{|\mathbf{r-r'}|} \mathrm e^{\mathrm i k |\mathbf{r-r'}|} e^{-\mathrm i \omega t}, $$ with $k=\omega/c$.

As a second approximation, we expand the distance in the exponential to first order, $$ |\mathbf{r-r'}| \approx r - \mathbf n \cdot \mathbf r' , $$ with $\mathbf n = \mathbf r/r$, while the inverse distance is expanded in powers of $1/r$ (the usual multipole expansion), $$ \frac{1}{|\mathbf{r-r'}|} = \frac{1}{r} + \frac{\mathbf r \cdot \mathbf r'}{r^3} + \dotsb, $$ of which we keep only the lowest order. It may seem strange to keep different orders in the exponent and in the denominator, but in the exponent we have the phase information, which varies on the scale of $\lambda$ while the neglected correction is of order $d$; in the denominator series, the terms $\sim 1/r^2$ can be neglected in comparison to the term $\sim 1/r$ for $r \rightarrow \infty$.
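The quality of the first-order expansion $|\mathbf{r-r'}| \approx r - \mathbf n \cdot \mathbf r'$ (with $\mathbf n = \mathbf r/r$) can be checked numerically with illustrative points: a source region of size $\sim 1$ viewed from a distance $\sim 100$, where the error is of order $|\mathbf r'|^2/r$.

```python
import numpy as np

rng = np.random.default_rng(0)

r_vec = np.array([100.0, 40.0, 30.0])    # distant observation point, |r| ~ 110
r = np.linalg.norm(r_vec)
n = r_vec / r                            # unit vector n = r / |r|

# Sample a few source points r' inside a region of size ~1 (so d << r).
for _ in range(5):
    rp = rng.uniform(-1.0, 1.0, 3)
    exact = np.linalg.norm(r_vec - rp)
    approx = r - n @ rp                  # first-order far-field expansion
    print(exact, approx, abs(exact - approx))   # error is of order |r'|^2 / r
```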

We obtain a vector potential of the form $$ \lim_{r\rightarrow\infty} \mathbf A(\mathbf r,t) = \frac{\mathrm e^{\mathrm i k r}}{r} e^{-\mathrm i \omega t} \int \mathrm d^3r' \mathbf j(\mathbf r') \mathrm e^{-\mathrm i k \mathbf{n \cdot r'}} , $$ i.e. the vector potential behaves like spherical waves, which give transverse waves for the fields. You can expand the exponential, $$ \lim_{r\rightarrow\infty} \mathbf A(\mathbf r,t) = \frac{\mathrm e^{\mathrm i k r}}{r} e^{-\mathrm i \omega t} \sum_n \frac{(-\mathrm i)^n}{n!} \int \mathrm d^3r' \mathbf j(\mathbf r') (k \mathbf{n \cdot r'})^n. $$

$\mathbf{n \cdot r'}$ is of the order of $d$, and $kd \ll 1$; this means that the terms get smaller with increasing $n$ (mind the $1/n!$ factor), so that the first non-zero term is the dominant contribution.
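For an illustrative value $kd = 0.1$, the magnitudes of the successive terms, $(kd)^n/n!$, indeed drop off rapidly:

```python
import math

kd = 0.1   # illustrative value of k*d in the long-wavelength limit kd << 1

# Magnitude of the n-th term in the expansion above: (kd)^n / n!
terms = [kd**n / math.factorial(n) for n in range(6)]
for n, term in enumerate(terms):
    print(n, term)
# each successive term is suppressed by a further factor kd/(n+1)
```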

So if we keep only the first term in this expansion, we have $$ \lim_{r\rightarrow\infty} \mathbf A(\mathbf r,t) = \frac{\mathrm e^{\mathrm i k r}}{r} e^{-\mathrm i \omega t} \int \mathrm d^3r' \mathbf j(\mathbf r') . $$ Using the continuity equation, $$ - \mathrm i \omega \rho + \nabla \cdot \mathbf j =0, $$ and integration by parts for each coordinate separately, $$ \int \mathrm d^3r' \mathbf j = - \int \mathrm d^3r' \mathbf r' (\nabla \cdot \mathbf j ) = - \mathrm i \omega \int \mathrm d^3r' \mathbf r' \rho (\mathbf r') = - \mathrm i \omega \mathbf p , $$ where $\mathbf p = \int \mathrm d^3r' \mathbf r' \rho (\mathbf r')$ is the definition of the dipole moment.
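The integration-by-parts step can be verified symbolically in one dimension (with an illustrative charge density of zero net charge, so that the current vanishes at infinity): continuity gives $j' = \mathrm i\omega\rho$, and indeed $\int j\,\mathrm dx = -\mathrm i\omega\int x\,\rho\,\mathrm dx$.

```python
import sympy as sp

x, xi = sp.symbols('x xi', real=True)
w = sp.symbols('omega', positive=True)

# Illustrative 1D charge density with zero total charge
# (needed so that the current vanishes at +/- infinity).
rho = x * sp.exp(-x**2)

# Continuity equation  -i*w*rho + j' = 0  with  j(-oo) = 0:
j = sp.I * w * sp.integrate(rho.subs(x, xi), (xi, -sp.oo, x))

lhs = sp.integrate(j, (x, -sp.oo, sp.oo))        # ∫ j dx
p = sp.integrate(x * rho, (x, -sp.oo, sp.oo))    # dipole moment ∫ x rho dx
rhs = -sp.I * w * p

print(sp.simplify(lhs - rhs))   # 0, confirming  ∫ j dx = -i*omega*p
```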

Therefore, the vector potential is of the form $$ \lim_{r\rightarrow\infty} \mathbf A(\mathbf r,t) = - \mathrm i \omega\frac{\mathrm e^{\mathrm i k r}}{r} e^{-\mathrm i \omega t} \mathbf p , $$ from which the fields can be calculated. The electric field calculated such is exactly of the form of the field of an ideal dipole (but oscillating in time).

We also have to analyze the retarded scalar potential: $$ \phi(\mathbf r,t) = \int \mathrm d^3r' \int \mathrm dt' \frac{\rho(\mathbf r',t')}{|\mathbf{r-r'}|} \delta(t'-t+|\mathbf{r-r'}|/c). $$ The monopole contribution is obtained by simply replacing $|\mathbf{r-r'}|$ by $r$, such that $$ \phi_{\mathrm m}(\mathbf r,t) = \frac{q(t-r/c)}{r} , $$ where $q(t)$ is the total charge as a function of time. But the total charge is, as already answered, conserved, so the monopole part of the scalar potential is necessarily static.

Summary: The fields of a moving charge can be approximated in the far field by the field of a dipole.


Alternatively, one could look at a more feasible way of calculating the radiation from arbitrary sources.

The statement in the notes of the first link you've provided is made in the context of the formalism of Green's functions, which is discussed in section 6.1. As a reminder, the Green's function is the solution of an inhomogeneous linear differential equation (for given boundary conditions) where the source (= the inhomogeneous part) is a $\delta$-function. Any other inhomogeneous solution can then be obtained easily via the superposition principle, by integrating (i.e. summing up) the Green's function multiplied by the actual source.

So the author is looking for a Green's function for the solution of Maxwell's equations, which, in the Lorentz gauge, are transformed via separation of variables (the time variable is separated off and thus does not appear in the following) into two Helmholtz equations, one for the scalar potential and one for the vector potential (leaving away all the constants): $$ [\nabla^2 + k^2] \phi (\mathbf{r}) = - \rho(\mathbf{r}) ,$$ $$ [\nabla^2 + k^2] \mathbf A (\mathbf{r}) = - \mathbf j(\mathbf{r}) .$$ The inhomogeneous parts of these Helmholtz equations are the sources, i.e. the charge and current densities.

So for the Green's functions we are looking for the solution of these inhomogeneous linear differential equations in free space where the charge and current densities are described by delta functions, that is equations of the form $$ [\nabla^2 + k^2] G (\mathbf{r}, \mathbf r') = - \delta(\mathbf{r-r'}) .$$

Now comes the crucial point: the author expresses in Eq. 6.20 the current density $\mathbf j(\mathbf r,t)$ of a time-dependent dipole with the help of the delta function, $$ \mathbf j (\mathbf r, t) = \delta(\mathbf{r-r_0}) \frac{\partial}{\partial t} \mathbf p(t), $$ where $\mathbf p(t)$ is the time-dependent dipole moment and $\mathbf r_0$ is the location of the dipole. So the Green's function can be re-expressed via the dipole moment (see the discussion around Eq. 6.24).

Please note that there is a difference between a physical dipole, which is two actual charges separated by a finite distance, and an ideal dipole, which is infinitesimally small and thus located at a single point!

For the formalism of Green's functions it is not desirable to have 'accelerating' (or 'moving', for that matter) delta functions as a source. How would you solve that? It is probably possible, but it would make things unnecessarily harder - you could not carry out the separation of variables to arrive at the Helmholtz equations. So you restrict yourself to sources that do not move in space, which excludes the accelerating charge. You're left with time-dependent but spatially fixed current and charge densities (to be more precise: the time dependence and the spatial dependence should factorize, like $ \rho(\mathbf r,t) = \rho_0(\mathbf r) R(t)$). A time-dependent charge density alone would just lead to a retarded scalar electric potential (and it would violate charge conservation, as mentioned in the other answer), giving a fluctuating electric field that is not an electromagnetic wave. The time-dependent current density, however, can give rise to electromagnetic waves provided the temporal dependence is of the right form (i.e. harmonic oscillation).

As the current density of an ideal dipole takes the form of a delta function, we can identify the ideal dipole as the source for the Green's function of electromagnetic radiation in free space.

Moreover, since you can decompose an arbitrary current density into a continuous distribution of ideal dipoles, you can calculate the radiation pattern of any distribution of currents by simply integrating the source distribution multiplied by the Green's function.


You can compare that with the Green's function in electrostatics. There, you have the Poisson equation, (leaving away constants), $$ \nabla^2 \phi (\mathbf r) = - \rho (\mathbf r) . $$

The Green's function $G(\mathbf{r,r'})$ would be the solution to the equation $$ \nabla^2 G (\mathbf{ r,r'}) = - \delta(\mathbf{ r-r'}) ,$$ for which the solution in free space (boundary conditions change the Green's function) is $$ G (\mathbf{ r,r'}) = \frac{1}{|\mathbf{r-r'}|} .$$ You then compare that with the potential of a single point charge, and you find that they have the same functional dependence. So you identify the point charge as the basic source of the electrostatic field.
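A quick illustration of using this Green's function (with hypothetical charges, and constants dropped as above): superposing $G$ for two opposite point charges reproduces the familiar on-axis dipole fall-off $\phi \sim p/r^2$, tying back to the dipole theme of the answer.

```python
import numpy as np

# Free-space Green's function of the Poisson equation (constants dropped,
# matching the convention above): G(r, r') = 1/|r - r'|.
def G(r, rp):
    return 1.0 / np.linalg.norm(np.asarray(r, float) - np.asarray(rp, float))

# Hypothetical source: two opposite unit charges separated by distance 1.
charges = [(+1.0, np.array([0.0, 0.0, 0.5])),
           (-1.0, np.array([0.0, 0.0, -0.5]))]

# The potential is the superposition of G weighted by each charge.
def phi(r):
    return sum(q * G(r, rp) for q, rp in charges)

# Far along the z-axis the pair behaves like a dipole: phi ~ p/z^2 with p = q*d = 1.
for z in (10.0, 20.0, 40.0):
    print(z, phi([0.0, 0.0, z]) * z**2)   # tends to the dipole moment p = 1
```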


In summary: the fact that the Green's function of the Helmholtz equation can be re-expressed via the current density of an ideal dipole makes it plausible to identify the ideal dipole as the basic element of electromagnetic radiation.