How to crop a UIImageView to a new UIImage in 'aspect fill' mode?

Let's divide the problem into two parts:

  1. Given the size of a UIImageView and the size of its UIImage, if the UIImageView's content mode is Aspect Fill, what is the part of the UIImage that fits into the UIImageView? We need, in effect, to crop the original image to match what the UIImageView is actually displaying.

  2. Given an arbitrary rect within the UIImageView, what part of the cropped image (derived in part 1) does it correspond to?

The first part is the interesting part, so let's try it. (The second part will then turn out to be trivial.)

Here's the original image I'll use:

https://static1.squarespace.com/static/54e8ba93e4b07c3f655b452e/t/56c2a04520c64707756f4267/1455596221531/

That image is 1000x611. Here's what it looks like scaled down (but keep in mind that I'm going to be using the original image throughout):

[image: the original photo, scaled down]

My image view, however, will be 139x182, and is set to Aspect Fill. When it displays the image, it looks like this:

[image: the 139x182 image view displaying the photo with Aspect Fill]

The problem we want to solve is: what part of the original image is being displayed in my image view, if my image view is set to Aspect Fill?

Here we go. Assume that iv is the image view:

let imsize = iv.image!.size
let ivsize = iv.bounds.size

// Aspect Fill uses whichever scale is larger, so the image completely covers the view
var scale : CGFloat = ivsize.width / imsize.width
if imsize.height * scale < ivsize.height {
    scale = ivsize.height / imsize.height
}

// The displayed region, expressed in the image's own coordinates, centered in the image
let croppedImsize = CGSize(width:ivsize.width/scale, height:ivsize.height/scale)
let croppedImrect =
    CGRect(origin: CGPoint(x: (imsize.width-croppedImsize.width)/2.0,
                           y: (imsize.height-croppedImsize.height)/2.0),
           size: croppedImsize)

So now we have solved the problem: croppedImrect is the region of the original image that is showing in the image view. Let's proceed to use our knowledge, by actually cropping the image to a new image matching what is shown in the image view:

let r = UIGraphicsImageRenderer(size:croppedImsize)
let croppedIm = r.image { _ in
    // Draw the full image shifted so that croppedImrect lands at the context's origin
    iv.image!.draw(at: CGPoint(x:-croppedImrect.origin.x, y:-croppedImrect.origin.y))
}

The result is this image (ignore the gray border):

[image: the cropped result]

But lo and behold, that is the correct answer! I have extracted from the original image exactly the region portrayed in the interior of the image view.

So now you have all the information you need. croppedIm is the UIImage actually displayed in the clipped area of the image view. scale is the scale between the image view and that image. Therefore, you can easily solve the problem you originally proposed! Given any rectangle imposed upon the image view, in the image view's bounds coordinates, you simply apply the scale (i.e. divide all four of its attributes by scale) — and now you have the same rectangle as a portion of croppedIm.
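
For instance, here's a minimal sketch of that conversion, using a hypothetical selection rect (the name and the numbers are made up) together with the scale computed earlier:

let selection = CGRect(x: 20, y: 30, width: 60, height: 40) // some rect in the image view's bounds coordinates
let selectionInCroppedIm = CGRect(x: selection.origin.x/scale,
                                  y: selection.origin.y/scale,
                                  width: selection.width/scale,
                                  height: selection.height/scale)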

(Observe that we didn't really need to crop the original image to get croppedIm; it was sufficient, in reality, to know how to perform that crop. The important information is the scale along with the origin of croppedImrect; given that information, you can take the rectangle imposed upon the image view, scale it, and offset it to get the desired rectangle of the original image.)
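
As a sketch of that shortcut (again with a hypothetical selection rect), scaling and then offsetting by croppedImrect's origin yields the corresponding region of the original image directly:

let selection = CGRect(x: 20, y: 30, width: 60, height: 40) // some rect in the image view's bounds coordinates
let selectionInOriginal = CGRect(x: selection.origin.x/scale + croppedImrect.origin.x,
                                 y: selection.origin.y/scale + croppedImrect.origin.y,
                                 width: selection.width/scale,
                                 height: selection.height/scale)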


EDIT I added a little screencast just to show that my approach works as a proof of concept:

[screencast: the approach in action]

EDIT I also created a downloadable example project here:

https://github.com/mattneub/Programming-iOS-Book-Examples/blob/39cc800d18aa484d17c26ffcbab8bbe51c614573/bk2ch02p058cropImageView/Cropper/ViewController.swift

But note that I can't guarantee that URL will last forever, so please read the discussion above to understand the approach used.


Matt answered the question perfectly. I was building a full-screen camera and needed the final output to match the full-screen preview. Here is a compact Swift 5 extension based on Matt's answer, for easy reuse by others. I recommend reading Matt's answer, as it explains the approach very well.

extension UIImage {
    /// Crops the image to the region that an Aspect Fill image view of the given size would display.
    func cropToRect(rect: CGRect) -> UIImage? {
        // Aspect Fill scale: the larger of the width and height ratios
        var scale = rect.width / self.size.width
        scale = self.size.height * scale < rect.height ? rect.height/self.size.height : scale

        // The displayed region, in the image's own coordinates, centered in the image
        let croppedImsize = CGSize(width:rect.width/scale, height:rect.height/scale)
        let croppedImrect = CGRect(origin: CGPoint(x: (self.size.width-croppedImsize.width)/2.0,
                                                   y: (self.size.height-croppedImsize.height)/2.0),
                                   size: croppedImsize)
        UIGraphicsBeginImageContextWithOptions(croppedImsize, true, 0)
        // Draw the full image shifted so that croppedImrect lands at the context's origin
        self.draw(at: CGPoint(x:-croppedImrect.origin.x, y:-croppedImrect.origin.y))
        let croppedImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return croppedImage
    }
}
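
Usage might look like this, assuming iv is a UIImageView whose content mode is Aspect Fill (the call site below is just an illustration):

if let cropped = iv.image?.cropToRect(rect: iv.bounds) {
    // cropped now matches what the Aspect Fill image view actually displays
}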