Images vs. Core Graphics

Images vs. Core Graphics is a blunt distinction. The ways offscreen and onscreen graphics are rendered are complex enough that you need Instruments to find out what is really happening. I tried to provide an overview here, but this answer could use improvement from more knowledgeable people.

GPU vs CPU rendering

Graphics are always rendered onscreen by the GPU. However, they can be generated by either the GPU or the CPU, and that work happens either in your own code or in a separate process called “the render server”. Here is an overview:

CPU, user code:

  • Core Graphics and Core Text
  • drawRect(). The result is usually cached. (See the sketch after this list.)
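
For the drawRect() point, here is a rough Swift sketch of CPU-side drawing: a made-up view that lays out its text with Core Text inside draw(_:). The result ends up in the layer's backing store, which is where the caching mentioned above happens.

```swift
import UIKit
import CoreText

// Made-up view used only to illustrate CPU drawing with Core Text in draw(_:).
final class LabelLikeView: UIView {
    var text: String = "Hello" { didSet { setNeedsDisplay() } }

    override func draw(_ rect: CGRect) {
        guard let ctx = UIGraphicsGetCurrentContext() else { return }

        // Core Text draws in a bottom-left-origin coordinate space, so flip the context.
        ctx.textMatrix = .identity
        ctx.translateBy(x: 0, y: bounds.height)
        ctx.scaleBy(x: 1, y: -1)

        let attributed = NSAttributedString(string: text, attributes: [
            .font: UIFont.systemFont(ofSize: 17)
        ])

        // Lay the text out inside the view's bounds and draw it on the CPU.
        let framesetter = CTFramesetterCreateWithAttributedString(attributed)
        let path = CGPath(rect: bounds, transform: nil)
        let frame = CTFramesetterCreateFrame(framesetter, CFRange(location: 0, length: 0), path, nil)
        CTFrameDraw(frame, ctx)
    }
}
```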

GPU, render server:

  • CALayer with shouldRasterize set to YES. This creates a cached bitmap of the layer and its sublayers.

GPU, render server, very slow:

  • CALayer using masks (setMasksToBounds) and dynamic shadows (setShadow*).
  • Group opacity (UIViewGroupOpacity). (A sketch of these properties follows this list.)
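
As a rough illustration, this Swift sketch sets the Core Animation properties mentioned above on a made-up view; each of them forces an extra offscreen pass on the render server:

```swift
import UIKit

// Made-up view used only to show which properties trigger offscreen rendering.
let badge = UIView(frame: CGRect(x: 0, y: 0, width: 44, height: 44))

// Masking: the layer tree is composited offscreen first, then clipped.
badge.layer.cornerRadius = 22
badge.layer.masksToBounds = true          // setMasksToBounds

// Dynamic shadow: without a shadowPath, Core Animation must render the layer
// offscreen first to discover its opaque shape.
badge.layer.shadowColor = UIColor.black.cgColor   // setShadow* family
badge.layer.shadowOpacity = 0.4
badge.layer.shadowOffset = CGSize(width: 0, height: 2)

// Group opacity (UIViewGroupOpacity, on by default since iOS 7): a translucent
// view with sublayers is composited offscreen before its alpha is applied.
badge.alpha = 0.5
```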

GPU, fast:

  • Creating images from PNG files. The stretching in stretchable images happens on the GPU as well.

Note that caching is only useful if the cache is reused; if it is immediately discarded, it hurts performance. For example, in an animation where cached contents are simply stretched, the cache can be reused, but an animation whose contents change every frame will perform terribly if the layer is cached.

Bitmaps vs drawing

Image files are generally faster than drawing.

  • Image files can be downloaded to disk in advance using aggressive caching.
  • Images can be read and decompressed from disk in the background. (See the sketch after this list.)
  • Images can be cached in memory if you use imageNamed: instead of initWithData:.
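
A minimal Swift sketch of the second and third points, assuming a hypothetical loadThumbnail helper and a local file URL; forcing one draw in the background is a common way to pay the decompression cost off the main thread:

```swift
import UIKit

/// Hypothetical helper: reads and decompresses an image off the main thread,
/// then hands a ready-to-display UIImage back on the main queue.
func loadThumbnail(from fileURL: URL, completion: @escaping (UIImage?) -> Void) {
    DispatchQueue.global(qos: .userInitiated).async {
        guard let data = try? Data(contentsOf: fileURL),
              let image = UIImage(data: data) else {
            DispatchQueue.main.async { completion(nil) }
            return
        }
        // Drawing the image once forces decompression here, in the background,
        // instead of lazily on the main thread during the next display pass.
        let renderer = UIGraphicsImageRenderer(size: image.size)
        let decoded = renderer.image { _ in
            image.draw(at: .zero)
        }
        DispatchQueue.main.async { completion(decoded) }
    }
}

// imageNamed: (UIImage(named:)) keeps a system-managed in-memory cache;
// UIImage(data:) does not, so cache those images yourself if you reuse them.
let cached = UIImage(named: "avatar-placeholder")
```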

Offscreen drawing requires more work, but lets you achieve more.

  • You can animate complex graphics with no quality loss, because the graphic is redrawn on every frame.
  • You can create graphics with i18n on the fly.
  • You should disable the implicit Core Animation animations if you don't need them. Example: a UIView with rounded corners (you just need the rounding, not the animation).
  • Drawing can be complex enough that you need to use Instruments to see where the time is going.
  • Drawing with drawRect is probably cached unless you use masking, shadows, edge antialiasing, or group opacity. You can request caching by calling -[CALayer setShouldRasterize:YES] and -[CALayer setRasterizationScale:]. (See the sketch after this list.)
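
A minimal Swift sketch of the last two points, using a made-up avatarView: the corner radius is changed with implicit animations disabled, and the composited result is rasterized and cached by the render server:

```swift
import UIKit

// Made-up view whose rounded-corner rendering we want cached.
let avatarView = UIImageView(frame: CGRect(x: 0, y: 0, width: 80, height: 80))

// Round corners without any animation: wrap the change in a CATransaction
// with actions disabled so Core Animation's implicit animation is skipped.
CATransaction.begin()
CATransaction.setDisableActions(true)
avatarView.layer.cornerRadius = 40
avatarView.layer.masksToBounds = true
CATransaction.commit()

// Ask the render server to cache the composited layer as a bitmap.
// Only worth it if the contents do not change every frame.
avatarView.layer.shouldRasterize = true                    // setShouldRasterize:
avatarView.layer.rasterizationScale = UIScreen.main.scale  // setRasterizationScale:
```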

Stretchable images, whether read from image files or generated by drawing, use less memory. Stretching is an inexpensive operation for the GPU.
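
For example, a short Swift sketch with a made-up asset name and cap insets:

```swift
import UIKit

// Made-up 12×12-point PNG whose corners must not be distorted.
let template = UIImage(named: "button-background")

// Only the center region is stretched; the 5-point caps stay intact, so a tiny
// bitmap can cover a button of any size, with the stretching done on the GPU.
let stretchable = template?.resizableImage(
    withCapInsets: UIEdgeInsets(top: 5, left: 5, bottom: 5, right: 5),
    resizingMode: .stretch
)

let button = UIButton(type: .custom)
button.setBackgroundImage(stretchable, for: .normal)
```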


Performance is only a problem when there isn't enough of it. Use whichever approach is faster to code unless you are pressed to do otherwise. The fastest program is the one that reaches the market first.

Some interesting reading:

  • Designing for iOS: Graphics & Performance. Don't miss the comments from Andy Matuschak. I edited the original answer with a lot of content from this article.
  • WWDC 2011 > Session 121: Understanding UIKit Rendering
  • WWDC 2012 > Essentials > Session 238: iOS App Performance: Graphics and Animations
  • WWDC 2012 > Essentials > Session 211: Building Concurrent User Interfaces on iOS
  • WWDC 2012 > Essentials > Session 223: Enhancing User Experience with Scroll Views
  • WWDC 2012 > Essentials > Session 240: Polishing Your Interface Rotations

In my experience, using images is almost always better from a performance point of view, but sometimes you have no choice and need to draw things manually.


I think images are better performance-wise. But when we want to build something that can change (e.g. color or shape), be scaled, or be animated, then we should go for Core Graphics.

Core Graphics is amazing when we want to draw something driven by mathematical logic and play with it. In school we all learn the equation of a circle, the equation of a line, and so on; Core Graphics is the right tool to visualise that mathematical knowledge and build even crazier things on top of it.
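
For instance, here is a rough Swift sketch (view name and parameters made up) that plots the circle equation x² + y² = r² point by point with Core Graphics:

```swift
import UIKit

// Made-up view that draws a circle of radius r from its equation,
// point by point, rather than using the ready-made arc API.
final class CircleGraphView: UIView {
    var radius: CGFloat = 80 { didSet { setNeedsDisplay() } }

    override func draw(_ rect: CGRect) {
        guard let ctx = UIGraphicsGetCurrentContext() else { return }
        let center = CGPoint(x: bounds.midX, y: bounds.midY)

        let path = CGMutablePath()
        // Parametric form of x² + y² = r²: x = r·cos θ, y = r·sin θ.
        for step in 0...360 {
            let theta = CGFloat(step) * .pi / 180
            let point = CGPoint(x: center.x + radius * cos(theta),
                                y: center.y + radius * sin(theta))
            if step == 0 { path.move(to: point) } else { path.addLine(to: point) }
        }
        path.closeSubpath()

        ctx.addPath(path)
        ctx.setStrokeColor(UIColor.systemBlue.cgColor)
        ctx.setLineWidth(2)
        ctx.strokePath()
    }
}
```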

Core Graphics is also very useful for simulation-style applications.