What is the physical significance of skewness and kurtosis of an electrical signal?

Those words, which describe how a statistical distribution differs from the Gaussian bell curve, are not commonly used to describe in words how a pulse looks on an oscilloscope. That's because electric circuits can produce far more complex voltage-vs.-time waveforms; there is no "normal pulse form".

If you have a noise signal and you are not interested in its exact voltage-vs.-time waveform, but in how the voltage values are statistically distributed, you can of course start with a Gaussian distribution, but some nonlinear circuit - say one containing a diode or a transistor - can distort the signal so that the distribution curve becomes skewed. The nonlinearity effectively multiplies the voltage by a factor that depends in a systematic way on the voltage itself.

An example:

In an audio amplifier the negative half of the push-pull output stage has gone silent. This doesn't prevent the positive half from working. A Gaussian noise input signal comes out with all negative values clipped to zero. Quite a bad skew!
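As a quick numerical illustration of that fault (my own sketch, not part of the original example): the snippet below generates zero-mean Gaussian noise, clips the negative half-cycles to zero the way the dead output half would, and compares the sample skewness before and after. The sample count and the `scipy.stats` helper are just convenient choices for the sketch.

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)
noise = rng.normal(loc=0.0, scale=1.0, size=100_000)  # Gaussian noise, skewness ~ 0

# Faulty push-pull stage: the negative half is dead, so negative excursions
# are clipped to 0 V while positive excursions pass through unchanged.
clipped = np.clip(noise, 0.0, None)

print(f"skewness of raw noise:     {skew(noise):+.3f}")    # close to 0
print(f"skewness of clipped noise: {skew(clipped):+.3f}")  # strongly positive
```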

Kurtosis can likewise be caused by nonlinearity, or by a fault or interfering signal that produces unusual values from time to time, so that the distribution becomes wider- or narrower-tailed than the expected noise alone would give.

Examples of kurtosis:

  1. An audio system outputs Gaussian noise because there is no actual signal, but the non-ideal components add some Gaussian noise. Then a strong interfering sparking device nearby randomly injects sharp peaks into the audio circuit. Every spark creates a loud snap at the output. This stretches the tails of the distribution (see the sketch after this list).

  2. A bad connection that occurs randomly can make near-zero values more common than the noise alone would explain, but it can just as well act like interfering impulse noise. The effect of a bad connection depends radically on where it sits in the circuit.
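Here is a small sketch of the first case (my own illustration, with an arbitrary impulse rate and amplitude): Gaussian background noise with rare, strong "spark" impulses mixed in. The rare spikes drive the excess kurtosis far above the Gaussian value of zero.

```python
import numpy as np
from scipy.stats import kurtosis  # Fisher definition: a Gaussian scores ~0

rng = np.random.default_rng(1)
background = rng.normal(0.0, 1.0, size=100_000)  # system noise floor

# Rare "spark" impulses: roughly 1 sample in 1000, 20x the noise amplitude,
# with random polarity (the exact numbers are assumptions for this sketch).
sparks = (rng.random(background.size) < 1e-3) * rng.choice([-20.0, 20.0], background.size)
noisy = background + sparks

print(f"excess kurtosis, noise only:  {kurtosis(background):+.2f}")  # ~0
print(f"excess kurtosis, with sparks: {kurtosis(noisy):+.2f}")       # strongly positive
```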

Statistical analysis is a cornerstone of the mathematics behind successful communication system design. Noise distribution analysis is not a common fault-finding method, because deterministic test signals are generally available. It can, however, be useful for finding out what causes the noise - which in electronics is generally something unwanted - or how to prevent the noise from causing too many communication errors.


That depends very much on the specific conditions or application you are looking at. I don't know of any general rule for how to interpret these statistical parameters in the context of electrical measurements; you really need context to do this.

I can only give you one example from my area of work:
When you measure sEMG (very small voltages on the surface of the skin caused by the electrical activity of the muscles beneath), you typically calculate statistical parameters of these highly stochastic signals. Classical techniques often determine only the mean value of the EMG to estimate the corresponding strength of muscle contraction. More modern approaches try to extract more information out of the signal by calculating additional parameters from a whole set of electrodes.
It was shown that the kurtosis also corresponds to the strength of muscle contraction [1] and can be used as an additional feature in pattern recognition to determine the movement the patient is performing.
However, it is questionable how useful this really is, because the redundancy with the mere mean or RMS value is quite high and the kurtosis does not yield much additional information from the raw EMG signal.
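For concreteness, a minimal sketch of how such features are typically computed - windowed RMS and kurtosis over a signal - is shown below. The synthetic amplitude-modulated noise stands in for a real sEMG recording, and the window length of 200 samples is an arbitrary assumption, not a value taken from [1].

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(2)

# Stand-in for an sEMG recording: Gaussian noise whose envelope grows
# as the simulated muscle contraction gets stronger.
envelope = np.linspace(0.1, 1.0, 10_000)
signal = envelope * rng.normal(0.0, 1.0, size=envelope.size)

window = 200  # samples per analysis window (arbitrary choice for this sketch)
frames = signal[: signal.size // window * window].reshape(-1, window)

rms_per_window = np.sqrt(np.mean(frames**2, axis=1))  # classical amplitude feature
kurt_per_window = kurtosis(frames, axis=1)             # additional feature per window

print("RMS of first/last window:     ", rms_per_window[0], rms_per_window[-1])
print("kurtosis of first/last window:", kurt_per_window[0], kurt_per_window[-1])
```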

This is the only application of calculating the kurtosis of electrical quantities like voltage that I have ever encountered. But I think this is actually quite an interesting question, and maybe other people can share more examples.

[1]: S. Krishnan, R. Akash, D. Kumar, et al., "Finger movement pattern recognition from surface EMG signals using machine learning algorithms," in Proceedings of the International Conference on Translational Medicine and Imaging 2017, B. Gulyás, P. Padmanabhan, A. Fred, et al., Eds. Singapore: Springer, 2019, ch. 7, pp. 75–89.


I'm being very handwavy, but...

Interpreting the skewness or kurtosis of a distribution isn't actually so straightforward, even in the setting of plain old probability theory.

What you need to realize is that these "higher-order moments" are "just" (i.e., modulo some details, scaling, etc.) the coefficients of the Maclaurin expansion of the Fourier transform of your signal's amplitude distribution, i.e. its characteristic function. So the moments are really a lot like derivatives and higher derivatives in calculus, telling you how the probability mass is spread around the mean and how quickly that spread falls off as you move away from it. This is why, for a zero-mean signal, the standard deviation is the RMS value (and the variance is the average power).
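To make that slightly less hand-wavy, here is the textbook relation being alluded to (standard probability theory, not something specific to this answer): treating the voltage as a random variable $X$ with mean $\mu$ and standard deviation $\sigma$,

```latex
% Moments appear (up to factors of i^n/n!) as Maclaurin coefficients of the
% characteristic function, i.e. the Fourier transform of the probability
% density of X:
\varphi_X(\omega) = \mathbb{E}\left[e^{i\omega X}\right]
                  = \sum_{n=0}^{\infty} \frac{(i\omega)^n}{n!}\,\mathbb{E}[X^n]

% Skewness and kurtosis are the normalized third and fourth central moments
% (the "excess" kurtosis reported by most tools subtracts 3, so that a
% Gaussian scores exactly 0):
\text{skewness} = \frac{\mathbb{E}\left[(X-\mu)^3\right]}{\sigma^3},
\qquad
\text{kurtosis} = \frac{\mathbb{E}\left[(X-\mu)^4\right]}{\sigma^4}
```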

The bigger these "higher-order moments" are, the more "often" rare large events will occur. Skewness, because it carries a sign, "broadly" tells you how often you might see a large positive or negative deviation from the mean, and the sign tells you which direction these deviations "skew" towards. But it's a relatively weak relationship.

Kurtosis is even harder to reason about. It measures the average of the fourth power of the deviation from the mean, normalized by the fourth power of the standard deviation. This quantity does not even carry sign information, but represents something like the standard deviation of the "instantaneous" standard deviation.
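As a rough illustration of that last point (my own sketch, not the answer's): a Gaussian signal whose "instantaneous" standard deviation wanders around ends up with positive excess kurtosis, even though each short stretch of it is perfectly Gaussian. The block length and the two sigma values below are arbitrary assumptions.

```python
import numpy as np
from scipy.stats import kurtosis  # excess kurtosis: a Gaussian scores ~0

rng = np.random.default_rng(3)
n = 200_000

# Constant-variance Gaussian noise: excess kurtosis ~ 0
steady = rng.normal(0.0, 1.0, size=n)

# Same Gaussian noise, but the "instantaneous" standard deviation flips
# between 0.5 and 2.0 in blocks of 1000 samples (a simple scale mixture).
sigma = np.repeat(rng.choice([0.5, 2.0], size=n // 1000), 1000)
wandering = sigma * rng.normal(0.0, 1.0, size=n)

print(f"excess kurtosis, constant sigma:  {kurtosis(steady):+.2f}")     # ~0
print(f"excess kurtosis, wandering sigma: {kurtosis(wandering):+.2f}")  # clearly > 0
```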