Face Detection with Camera

To perform face detection on iOS, you can use either CIDetector (Apple) or the Mobile Vision API (Google).

IMO, Google Mobile Vision provides better performance.

If you are interested, here is a project you can play with (iOS 10.2, Swift 3).


At WWDC 2017, Apple introduced Core ML in iOS 11. The Vision framework makes face detection more accurate :)
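For a feel of the API, here is a minimal Vision sketch for detecting face rectangles in a UIImage (iOS 11+, Swift 4; the function shape and completion-handler style are just for illustration):

import Vision
import UIKit

// Minimal sketch: face rectangle detection with Vision (iOS 11+).
func detectFaces(in image: UIImage, completion: @escaping ([VNFaceObservation]) -> Void) {
    guard let cgImage = image.cgImage else { return completion([]) }

    let request = VNDetectFaceRectanglesRequest { request, _ in
        // Each observation's boundingBox is normalized (0...1),
        // with the origin at the bottom-left of the image.
        completion(request.results as? [VNFaceObservation] ?? [])
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, orientation: .up, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        do {
            try handler.perform([request])
        } catch {
            completion([])
        }
    }
}

Swapping VNDetectFaceRectanglesRequest for VNDetectFaceLandmarksRequest returns the landmark points as well.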

I've made a demo project comparing Vision vs. CIDetector. It also contains face landmark detection in real time.


There are two ways to detect faces: CIDetector (with CIDetectorTypeFace) and AVCaptureMetadataOutput. Depending on your requirements, choose what is relevant for you.

CIDetector has more features: it gives you the location of the eyes and mouth, a smile detector, etc.
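As a minimal sketch of what that looks like on a still image (the input image is an assumption for illustration; options trimmed to the essentials):

import CoreImage
import UIKit

// Minimal sketch: CIDetector face detection on a still image.
// `uiImage` is a placeholder input for illustration.
func detectFaceFeatures(in uiImage: UIImage) -> [CIFaceFeature] {
    guard let ciImage = CIImage(image: uiImage) else { return [] }

    let detector = CIDetector(ofType: CIDetectorTypeFace,
                              context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])

    let features = detector?.features(in: ciImage,
                                      options: [CIDetectorSmile: true]) as? [CIFaceFeature] ?? []

    for face in features {
        // CIFaceFeature exposes eye/mouth positions and a smile flag.
        if face.hasLeftEyePosition { print("left eye:", face.leftEyePosition) }
        if face.hasMouthPosition { print("mouth:", face.mouthPosition) }
        print("smiling:", face.hasSmile)
    }
    return features
}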

On the other hand, AVCaptureMetadataOutput is computed on the frames, and the detected faces are tracked with no extra code needed from us. I find that, because of tracking, faces are detected more reliably with this approach. The downside is that you will simply detect faces, not the position of the eyes or mouth. Another advantage of this method is that orientation issues are smaller: you can use videoOrientation whenever the device orientation changes, and the orientation of the faces will be relative to that orientation.
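For reference, here is a minimal Swift 4 setup sketch for face detection with AVCaptureMetadataOutput (the class name is just for illustration; permission checks and error handling are omitted):

import AVFoundation

final class FaceMetadataDetector: NSObject, AVCaptureMetadataOutputObjectsDelegate {
    let session = AVCaptureSession()

    func configure() {
        guard let camera = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: camera),
              session.canAddInput(input) else { return }
        session.addInput(input)

        let output = AVCaptureMetadataOutput()
        guard session.canAddOutput(output) else { return }
        session.addOutput(output)

        // metadataObjectTypes must be set after the output is added to the session.
        output.setMetadataObjectsDelegate(self, queue: .main)
        if output.availableMetadataObjectTypes.contains(.face) {
            output.metadataObjectTypes = [.face]
        }
    }

    func metadataOutput(_ output: AVCaptureMetadataOutput,
                        didOutput metadataObjects: [AVMetadataObject],
                        from connection: AVCaptureConnection) {
        // Each AVMetadataFaceObject carries a tracked faceID and normalized bounds.
        for case let face as AVMetadataFaceObject in metadataObjects {
            print("face \(face.faceID): \(face.bounds)")
        }
    }
}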

In my case, my application used YUV420 as the required format, so using CIDetector (which works with RGB) in real time was not viable. Using AVCaptureMetadataOutput saved a lot of effort and performed more reliably due to continuous tracking.

Once I had the bounding box for the faces, I coded extra features, such as skin detection, and applied them to the still image.

Note: When you capture a still image, the face box information is added along with the metadata so there are no sync issues.

You can also use a combination of the two to get better results.

Explore and evaluate the pros and cons as per your application.


The face rectangle is relative to the image origin, so on screen it may be different. Use:

for (AVMetadataFaceObject *faceFeatures in metadataObjects) {
    // bounds is normalized (0..1) in the metadata coordinate space,
    // where the axes are swapped relative to a portrait preview.
    CGRect face = faceFeatures.bounds;

    // Swap the axes and scale into the preview layer's rect
    // (previewLayerRect is the bounds of your preview layer).
    CGRect facePreviewBounds = CGRectMake(face.origin.y * previewLayerRect.size.width,
                                          face.origin.x * previewLayerRect.size.height,
                                          face.size.width * previewLayerRect.size.height,
                                          face.size.height * previewLayerRect.size.width);

    /* Draw rectangle facePreviewBounds on screen */
}
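As a side note, if you render through an AVCaptureVideoPreviewLayer, it can do this conversion for you via transformedMetadataObject(for:); a small Swift sketch (the helper name is just for illustration):

import AVFoundation

// Sketch: let the preview layer map metadata-space bounds into its own
// coordinate space instead of doing the axis swap by hand.
func facePreviewRects(for metadataObjects: [AVMetadataObject],
                      in previewLayer: AVCaptureVideoPreviewLayer) -> [CGRect] {
    return metadataObjects.compactMap { object in
        previewLayer.transformedMetadataObject(for: object)?.bounds
    }
}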