What causes blurriness in an optical system?

To add some details to Eoin's answer.

Your description of imaging as a mapping is a good one to begin with and it will get you a long way. However, even in ray optics (see my answer here for more info), i.e. when the propagation of light can be approximated by the eikonal equation (see here for more info), the one-to-one mapping of points between the object and image plane that you describe can only happen under very special conditions. In general, a bundle of rays diverging from one point will not converge to a point after passing through an imaging system made of refracting lenses and mirrors. One has to design the system so that the convergence is well approximated by convergence to a point. As Eoin said, this non-convergence is the ray-theory description of aberration: spherical, comatic, astigmatic, trefoil, tetrafoil, pentafoil and so forth are words used to describe aberrations with particular symmetries (spherical aberration is rotationally symmetric about the chief ray, coma flips sign on a $180^\circ$ rotation about the chief ray, trefoil flips sign on a $60^\circ$ rotation and repeats on a $120^\circ$ rotation, and so forth).

There is also chromatic aberration, where the image point position depends on wavelength, so that point sources with a spectral spread have blurred images.

Lastly, the imaging surface, comprising the points of "least confusion" (i.e. those best approximating where the rays converge to a point), is always curved to some degree - it is often well approximated by an ellipsoid - and so even if convergence to points were perfect, the focal surface would not line up with a flat CCD array. This is known as lack of flatness of field: microscope objectives with particularly flat imaging surfaces bear the word "Plan" (so you have "Plan Achromat", "Plan Apochromat" and so forth).
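The azimuthal symmetries of these aberrations can be checked numerically. A minimal sketch, using $\cos(m\theta)$ as the angular factor of an $m$-fold aberration term (as in the Zernike polynomials; the azimuth value is an arbitrary choice for illustration):

```python
import math

def angular_part(m, theta):
    """Angular dependence cos(m*theta) of an aberration term with
    m-fold symmetry: m=0 spherical, m=1 coma, m=3 trefoil."""
    return math.cos(m * theta)

theta = 0.4  # arbitrary azimuth about the chief ray, radians

# Coma (m=1) flips sign under a 180-degree rotation about the chief ray:
assert math.isclose(angular_part(1, theta + math.pi), -angular_part(1, theta))

# Trefoil (m=3) flips sign under a 60-degree rotation
# and repeats under a 120-degree rotation:
assert math.isclose(angular_part(3, theta + math.pi / 3), -angular_part(3, theta))
assert math.isclose(angular_part(3, theta + 2 * math.pi / 3), angular_part(3, theta))

# Spherical aberration (m=0) is rotationally symmetric:
assert math.isclose(angular_part(0, theta + 1.234), angular_part(0, theta))
```

Running it raises no assertion errors, confirming the symmetry pattern described above.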

Only very special systems allow for convergence of all ray bundles diverging from points in the object surface to precise points in the image surface. Two famous examples are the Maxwell Fisheye Lens and the Aplanatic Sphere: both of these are described in the section called "Perfect Imaging Systems" in Born and Wolf, "Principles of Optics". They are also only perfect at one wavelength.

An equivalent condition for convergence to a point is that the total optical path length - the optical Lagrangian - is the same for all rays passing between the two points.
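You can verify this equal-path condition for an ideal thin lens to paraxial order. A hedged sketch (the focal length and object distance are assumed values; the lens is modelled by its standard paraxial path contribution $-h^2/2f$ at ray height $h$):

```python
import math

f = 0.10    # focal length, m (assumed)
d_o = 0.30  # object distance, m (assumed)
d_i = 1.0 / (1.0 / f - 1.0 / d_o)  # image distance from 1/d_o + 1/d_i = 1/f

def opl(h):
    """Optical path length from the axial object point, through the lens
    at height h, to the axial image point."""
    geometric = math.sqrt(d_o**2 + h**2) + math.sqrt(d_i**2 + h**2)
    lens = -h**2 / (2.0 * f)  # thin-lens paraxial path contribution
    return geometric + lens

paths = [opl(h) for h in (0.0, 0.002, 0.005, 0.01)]
spread = max(paths) - min(paths)
print(spread)  # tiny compared with the total path of about 0.45 m
assert spread < 1e-6
```

The spread only vanishes to paraxial order; the higher-order residual that survives for larger $h$ is precisely the spherical aberration of the idealised lens.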

Generally, lens systems are designed so that perfect imaging as you describe happens on the optical axis. The ray convergence at all other points is only approximate, although it can be made good enough that diffraction effects outweigh the non-convergence.

And of course, finally, even if everything else is perfect, there is the diffraction limitation described by Eoin. The diffraction limit arises because converging plane waves with wavenumber $k$ cannot encode any object variation at a spatial frequency greater than $k$ radians per metre. This, if you like, is the greatest spatial-frequency Fourier component that one has to build an image out of: images more wiggly than this Fourier component of maximum wiggliness cannot form.

A uniform-amplitude, aberration-free spherical wave converges to an Airy disk, whose radius is often taken to define the "minimum resolvable diffraction-limited distance". However, this minimum distance is a bit subtler than that: it is ultimately set by the signal-to-noise ratio as well, so an extremely clean optical signal can resolve features a little smaller than the so-called diffraction limit, while most systems, even if their optics are diffraction limited, are further limited by noise to somewhat worse than the "diffraction limit".
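The Airy-disk scale is easy to compute. A minimal sketch with assumed values (green light, a modest camera lens); the first dark ring of the Airy pattern sits at the standard angular radius $1.22\,\lambda/D$:

```python
import math

wavelength = 550e-9  # green light, m (assumed)
D = 25e-3            # aperture diameter, m (assumed)
f = 50e-3            # focal length, m (assumed)

theta_min = 1.22 * wavelength / D  # angular radius of first Airy minimum, rad
r_airy = theta_min * f             # corresponding spot radius on the sensor, m

print(r_airy * 1e6)  # spot radius in micrometres, about 1.3 um here
```

Two point images closer together than roughly this radius merge into one blob, whatever the quality of the optics, unless the signal-to-noise ratio is high enough to dig a little below it, as noted above.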


A perfect optical system has to accept a ray coming in at any angle and bend it to exactly the correct point on the image plane. Unfortunately it isn't possible to make a lens which can do this for all possible rays because the shape would have to be different for rays of different angles hitting the same part of the lens (or mirror).

A pinhole achieves this by limiting the rays to a very small range of ray angles and having a curved image plane. Systems of lenses approximate the ideal behaviour by adding more and more optical surfaces with different types of curvature, each correcting a specific range of problems. But (at least with a finite number of components) you can't please all the rays.
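The pinhole has its own trade-off: shrinking the hole reduces the geometric blur but increases the diffraction blur. A hedged sketch using a common rough model (the spot size at screen distance $L$ is taken as $b(d) = d + 2.44\,\lambda L/d$; the constant and the distances are illustrative assumptions, not exact data):

```python
import math

wavelength = 550e-9  # m (assumed)
L = 0.10             # pinhole-to-screen distance, m (assumed)

def blur(d):
    """Rough total spot size: geometric term + Airy diffraction term."""
    return d + 2.44 * wavelength * L / d

# Minimising b(d) analytically gives d = sqrt(2.44 * wavelength * L):
d_opt = math.sqrt(2.44 * wavelength * L)
print(d_opt * 1e3)  # optimal diameter in mm, about 0.37 mm here

# A numerical scan agrees with the closed form:
ds = [i * 1e-6 for i in range(50, 2000)]
d_best = min(ds, key=blur)
assert abs(d_best - d_opt) < 2e-6
```

This is why real pinhole cameras use holes of a few tenths of a millimetre rather than making them as small as possible.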

Then there are other technical difficulties such as chromatic aberration (different wavelengths bend by different amounts in the same glass).
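The effect on focus is easy to estimate. A hedged sketch for a thin symmetric lens, $f = R/2(n-1)$, with a Cauchy dispersion model $n(\lambda) = A + B/\lambda^2$; the coefficients below are illustrative, roughly BK7-like, not exact glass data:

```python
A, B = 1.5046, 4.2e-15  # Cauchy coefficients (assumed; B in m**2)
R = 0.10                # surface radius of curvature, m (assumed)

def n(lam):
    """Refractive index at vacuum wavelength lam via the Cauchy model."""
    return A + B / lam**2

def focal_length(lam):
    """Thin symmetric lens: f = R / (2 * (n - 1))."""
    return R / (2.0 * (n(lam) - 1.0))

f_blue = focal_length(450e-9)
f_red = focal_length(650e-9)
print(f_blue, f_red)  # blue focuses closer to the lens than red
assert f_blue < f_red
```

Since blue and red cannot both be in focus on the same plane, a white point source is imaged with coloured fringes; achromatic doublets pair two glasses to cancel most of this shift.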


I'm not sure the previous posts answer your final question, so I want to give you an intuitive picture of the situation. Granting the point made by @WetSavannaAnimal that the "true" mapping is

{object plane point} to {image plane point}

when talking of conventional optical systems - and not ray to point, since many rays passing through your lens end at the same point (you may then think of

{every ray coming from the same object point} to {image plane point}),

you can understand the causes of blurriness with a ray diagram. I will also neglect diffraction, which reduces sharpness in your pinhole camera.

If your system is a single lens you may think of light as a cone:

*(figure: a cone of rays from an object point converging through a lens, with screen positions A, B and C around the focus)*

If your image forms at B and you put a screen, a film, a sensor... there, you get your point-to-point mapping. If you put it at A or C you get a point-to-disk mapping, giving overlapping disks when your object has more than one point. These disks are usually Gaussian or Airy-like, due to diffraction. As you note with the pinhole limit, the finite aperture (collecting not a full $2\pi$ sr of light but a cone) is the key. If we now reduce the aperture with a diaphragm:

*(figure: the same ray cone narrowed by a diaphragm)*

as we do in photography to enlarge the depth of field, i.e., to get every plane "focused", or to get a picture focused wherever the plate is, as in the pinhole camera. In this limit you can also understand why the lens does not matter at all: it is locally flat at its centre. We usually work in terms of transfer functions of black boxes to connect the object and screen planes.
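The point-to-disk mapping above can be put in numbers with similar triangles on the converging cone: a point focused at image distance $d_i$ produces, on a screen at distance $s$, a blur disk of diameter $c = A\,|s - d_i|/d_i$ for aperture diameter $A$. A minimal sketch with assumed values:

```python
import math

A = 25e-3   # aperture diameter, m (assumed)
f = 50e-3   # focal length, m (assumed)
d_o = 2.0   # object distance, m (assumed)
d_i = 1.0 / (1.0 / f - 1.0 / d_o)  # in-focus image distance (plane B)

def blur_diameter(s, aperture=A):
    """Blur disk diameter on a screen at distance s from the lens."""
    return aperture * abs(s - d_i) / d_i

c_open = blur_diameter(d_i + 1e-3)             # screen 1 mm behind focus
c_stopped = blur_diameter(d_i + 1e-3, A / 5)   # same defocus, stopped down 5x
print(c_open, c_stopped)

assert blur_diameter(d_i) == 0.0               # point-to-point at plane B
assert math.isclose(c_stopped, c_open / 5)     # diaphragm shrinks every disk
```

Stopping down shrinks the disk for every screen position at once, which is exactly why a diaphragm enlarges the depth of field.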

So if you need glasses, you can make some cheap ones with aluminium foil and a pin. That arrangement lets you observe a wide range of interesting phenomena (like watching floaters, viewing your own fundus, testing for dust in your camera's lenses or sensors, etc.).

Tags:

Lenses

Optics