How is it possible there are UV photos while our eyes cannot detect UV waves?

The images are taken by UV/IR cameras, but the recorded frequencies are mapped down (or up) into the visible region using some scheme. If you want to preserve the ratios of the frequencies, you use a linear scaling. In general, the scaling is chosen to strike a balance between aesthetics and informativeness.
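As a rough illustration of such a scaling, the Python sketch below shifts a band of UV frequencies down into the visible band with a single multiplicative factor, which preserves frequency ratios. The band edges and the factor `k` are made up for the example, not taken from any real instrument.

```python
import numpy as np

# Illustrative only: shift UV frequencies down into the visible band by a
# single multiplicative factor, so that frequency *ratios* are preserved.
# The band edges and the factor k below are assumptions for this example.
VISIBLE_BAND = (4.0e14, 7.9e14)   # Hz, roughly red to violet

def scale_to_visible(f_uv, k=40.0):
    """Map UV frequencies to visible ones by dividing by a constant k.

    Because the map is purely multiplicative, the ratio of any two mapped
    frequencies equals the ratio of the originals.
    """
    f_vis = np.asarray(f_uv) / k
    return np.clip(f_vis, *VISIBLE_BAND)  # anything outside the band is clamped

# Two UV frequencies keep their ratio after the mapping.
f1, f2 = 1.6e16, 2.8e16                          # Hz, made-up UV frequencies
print(scale_to_visible(f2) / scale_to_visible(f1))  # -> 1.75, same as f2/f1
```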

In light of the overwhelming attention to this question, I have decided to add the recent image of a black hole taken by the Event Horizon Telescope. This image was captured in radio waves by an array of radio telescopes at eight different places on Earth, and the data was combined and represented in the following way.

[Image: the black hole image produced by the Event Horizon Telescope]

A point that I forgot to mention, which was pointed out by @thegreatemu in the comments below, is that the black hole image data was all collected at a single radio wavelength (of $1.3$ mm). The colour in this image signifies the intensity of the radio wave: brighter colour implies a stronger signal.
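To illustrate the kind of mapping involved (this is a toy sketch with random numbers standing in for real data, not the actual EHT pipeline), the Python snippet below pushes a 2-D array of intensities through a colour map so that a stronger signal appears brighter:

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy sketch, not the EHT pipeline: a single-wavelength radio map is just a
# 2-D array of intensities.  To display it, each intensity is mapped to a
# colour, with brighter colours for stronger signal.
rng = np.random.default_rng(0)
intensity = rng.random((64, 64))        # stand-in for measured radio intensity

plt.imshow(intensity, cmap="inferno")   # "inferno": dark = weak, bright = strong
plt.colorbar(label="relative radio intensity")
plt.show()
```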


Because you can build a camera that can.

The sensitivity of a camera is not determined by human eyes, but by the construction of the camera's sensor. Since in most common applications we want the camera to capture something that mimics what our eyes see, we generally build cameras to be sensitive to approximately the same frequencies of light that our eyes are.

However, there's nothing to prevent you from building a camera that is sensitive to a different frequency spectrum, and so we do. Not only ultraviolet, but infrared, X-rays, and more are all possible targets.

If you're asking why the pictures are visible, well, that's because we need to see them with our eyes, so we paint them using visible pigments or display them on screens that emit visible light. However, this doesn't make the pictures "wrong" - at the basic level, visible-light and UV/IR pictures taken by modern cameras are the same thing: long strings of binary bits, not "colors". They take interpretation to become useful to us.
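As a toy illustration of that interpretation step, here is a minimal Python sketch in which a made-up 12-bit sensor readout is rescaled to the 8-bit range a typical display expects; the bit depths and values are assumptions for the example.

```python
import numpy as np

# Sketch: a "picture" from a UV camera is just numbers.  Here a hypothetical
# 12-bit sensor readout (values 0-4095) is rescaled to the 0-255 range a
# standard display expects.  Nothing in this step depends on whether the
# original photons were visible or ultraviolet.
raw = np.random.default_rng(1).integers(0, 4096, size=(4, 4))  # fake readout
displayable = (raw / 4095 * 255).astype(np.uint8)
print(displayable)
```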

Typically, UV/IR cameras take greyscale images, because there are no sensible "colors" to assign to the different frequencies - or rather, "color" is something our brains make up, not a property of the light itself. So rendering all "invisible" light in grey is no more "wrong" than anything else, and it makes the sensors easier (and cheaper) to build: a color-discriminating camera works essentially the same way your eyes do, with sub-pixel elements that are sensitive to different frequency ranges.
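To make the sub-pixel idea concrete, here is a small Python sketch of a Bayer-style filter mosaic (the RGGB layout is the common arrangement for visible-light color sensors; the function itself is purely illustrative):

```python
import numpy as np

# Sketch of why color sensors are more complex: a Bayer-style mosaic puts
# sub-pixel filters with different passbands in front of a monochrome sensor.
# The pattern below is the common RGGB layout; a greyscale UV camera can skip
# all of this and record one value per pixel.
def bayer_mask(h, w):
    mask = np.empty((h, w), dtype="<U1")
    mask[0::2, 0::2] = "R"
    mask[0::2, 1::2] = "G"
    mask[1::2, 0::2] = "G"
    mask[1::2, 1::2] = "B"
    return mask

print(bayer_mask(4, 4))
```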


When you are looking at a UV or IR photo, the intensities of these (invisible) rays are represented by different (visible) colors and brightnesses in the photo. This technique of rendering things our eyes cannot see into images that we can see is common, and the images thus prepared are called false-color images.
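As a minimal sketch of how such a false-color image can be assembled, the Python snippet below maps three hypothetical invisible bands (random arrays standing in for real measurements) onto the red, green and blue channels of an ordinary display image; the band names and data are assumptions for the example.

```python
import numpy as np
import matplotlib.pyplot as plt

# Minimal false-color sketch: three hypothetical invisible bands (say two IR
# bands and one UV band, here just random arrays) are assigned to the red,
# green and blue channels of a display image.  The channel assignment is a
# convention, not a physical property of the light.
rng = np.random.default_rng(2)
band_ir_long, band_ir_short, band_uv = (rng.random((64, 64)) for _ in range(3))

false_color = np.dstack([band_ir_long, band_ir_short, band_uv])  # -> R, G, B
plt.imshow(false_color)
plt.title("False-color composite of invisible bands")
plt.show()
```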