Decoding images in Swift with Image I/O

While working on a SwiftUI package for downloading and displaying images, I ran into the need to implement image decoding. I used Image I/O and the WebKit source code to implement a custom image decoder in Swift.

Why would you need a custom image decoder? For me there were two problems to solve: loading an image incrementally and displaying animated images.

Loading an image incrementally allows displaying a partial image while loading is still in progress. Not all images support this: the image must be encoded in an interlaced (progressive) format.

Displaying animated images is another reason to use a custom image decoder. UIKit has long supported animated images, but the UIImage init?(data: Data) initializer creates a static image.

Beyond init with data

When initialized with data, UIImage creates a CGImageSource under the hood but uses only the first frame of the image.

The Image I/O framework programming guide explains how to incrementally load an image:

  • Create the data object for accumulating the image data;
  • Create an incremental image source by calling the function CGImageSourceCreateIncremental;
  • Add image data to the data object and call the function CGImageSourceUpdateData to update the incremental image source;
  • Create an image by calling CGImageSourceCreateImageAtIndex;
  • Check whether you have all the data for the image by calling the function CGImageSourceGetStatusAtIndex. Repeat the previous steps until the image is complete.
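
The steps above can be sketched as follows; the chunk-handling function and variable names are mine, not from the guide:

```swift
import Foundation
import ImageIO

// 1. The data object for accumulating the image data.
var accumulatedData = Data()

// 2. The incremental image source.
let imageSource = CGImageSourceCreateIncremental(nil)

// Called whenever a new chunk of image data arrives (hypothetical callback).
func didReceive(_ chunk: Data, isFinal: Bool) {
    // 3. Add the data and update the incremental image source.
    // Note: Image I/O expects all the data accumulated so far, not just the new chunk.
    accumulatedData.append(chunk)
    CGImageSourceUpdateData(imageSource, accumulatedData as CFData, isFinal)

    // 4. Try to create a (possibly partial) image.
    if let partialImage = CGImageSourceCreateImageAtIndex(imageSource, 0, nil) {
        // Display `partialImage` …
        _ = partialImage
    }

    // 5. Check whether all the data for the image has arrived.
    if CGImageSourceGetStatusAtIndex(imageSource, 0) == .statusComplete {
        // Loading finished.
    }
}
```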

Pretty straightforward, but there is no sample code, and the guide only explains how to encode an animated image, not decode one. There are some examples online, but what better inspiration than how WebKit implements image decoding? So let’s dive into it.

WebKit image decoder in Swift

WebKit uses the WebCore library for layout and rendering, and this is where the image decoding code lives. WebCore is a cross-platform library built in C++. The base ImageDecoder class declares a number of virtual functions and implements a factory function that creates the concrete instance for the current platform. On macOS and iOS, WebCore uses the Core Graphics framework, and the implementation is in the ImageDecoderCG class.

The WebCore implementation closely follows the steps in the Image I/O programming guide. The SharedBuffer class accumulates the image data; for our implementation we can use a standard Data object.

ImageDecoder manages the incremental image source. Before creating the image source, WebCore attempts to get a UTI hint. This is done using the private CGImageSourceGetTypeWithData function, so we’ll skip this step.

Updating the image source with data is simple. We can keep track of whether all the data has been downloaded, to avoid updating the image source after loading has finished.
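
A minimal sketch of that update logic, assuming a decoder class that owns the incremental image source (the names ImageDecoder, setData(_:allDataReceived:), and isAllDataReceived here are my own):

```swift
import Foundation
import ImageIO

final class ImageDecoder {
    let imageSource = CGImageSourceCreateIncremental(nil)
    private(set) var isAllDataReceived = false

    func setData(_ data: Data, allDataReceived: Bool) {
        // Never update the image source again after the final chunk arrived.
        guard !isAllDataReceived else { return }
        isAllDataReceived = allDataReceived
        // Image I/O expects the entire accumulated data, not just the new bytes.
        CGImageSourceUpdateData(imageSource, data as CFData, allDataReceived)
    }
}
```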

Creating an image from the image source is a more complex routine. The image source can include multiple images; WebCore treats multiple images as animation frames, and so do we:
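
For example, the frame count can simply mirror the number of images in the source (the frameCount name is an assumption):

```swift
import ImageIO

// Every image in the source is treated as one animation frame.
extension CGImageSource {
    var frameCount: Int {
        CGImageSourceGetCount(self)
    }
}
```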

To get the properties of an individual frame we can use the CGImageSourceCopyPropertiesAtIndex function. Animation properties, like frame duration, are stored differently for the GIF, APNG, and HEICS image formats:

Frame duration is an interesting property. WebCore won’t allow durations shorter than 11 ms, to prevent some ads from flashing. I decided to follow this logic.
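
A sketch of reading a frame’s duration from the per-format property dictionaries, with the short-duration cutoff applied (the frameDuration(of:at:) helper name and the 100 ms fallback follow WebCore’s approach as I read it):

```swift
import Foundation
import ImageIO

func frameDuration(of imageSource: CGImageSource, at index: Int) -> TimeInterval {
    guard let properties = CGImageSourceCopyPropertiesAtIndex(imageSource, index, nil)
        as? [CFString: Any] else { return 0.1 }

    var duration: TimeInterval = 0.0

    // Each format keeps its animation properties in its own sub-dictionary.
    if let gif = properties[kCGImagePropertyGIFDictionary] as? [CFString: Any] {
        duration = gif[kCGImagePropertyGIFUnclampedDelayTime] as? TimeInterval
            ?? gif[kCGImagePropertyGIFDelayTime] as? TimeInterval ?? 0.0
    } else if let png = properties[kCGImagePropertyPNGDictionary] as? [CFString: Any] {
        duration = png[kCGImagePropertyAPNGUnclampedDelayTime] as? TimeInterval
            ?? png[kCGImagePropertyAPNGDelayTime] as? TimeInterval ?? 0.0
    } else if let heics = properties[kCGImagePropertyHEICSDictionary] as? [CFString: Any] {
        duration = heics[kCGImagePropertyHEICSUnclampedDelayTime] as? TimeInterval
            ?? heics[kCGImagePropertyHEICSDelayTime] as? TimeInterval ?? 0.0
    }

    // Follow WebCore: durations shorter than 11 ms are replaced with 100 ms.
    return duration < 0.011 ? 0.1 : duration
}
```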

We also need the size of a frame. This is simply two dimensions in pixels:
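
A possible helper for this, using the pixel width and height from the frame’s properties (the frameSize(of:at:) name is my own):

```swift
import CoreGraphics
import ImageIO

func frameSize(of imageSource: CGImageSource, at index: Int) -> CGSize? {
    guard let properties = CGImageSourceCopyPropertiesAtIndex(imageSource, index, nil)
        as? [CFString: Any],
        let width = properties[kCGImagePropertyPixelWidth] as? Int,
        let height = properties[kCGImagePropertyPixelHeight] as? Int
    else { return nil }

    return CGSize(width: width, height: height)
}
```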

You may have noticed the options we pass to the Image I/O functions. ImageDecoder uses two sets of options. The first asks for the image to be cached in decoded form. The second set is used to decode an image asynchronously: the image is decoded immediately and at its full pixel size. This is more demanding, but it allows decoding to be shifted off the main thread.
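
The two option sets might look like this; the exact keys WebCore uses are in ImageDecoderCG.cpp, and the variable names here are assumptions:

```swift
import ImageIO

// Synchronous decoding: cache the decoded image, but decode lazily on first draw.
let imageSourceOptions: [CFString: Any] = [
    kCGImageSourceShouldCache: true
]

// Asynchronous decoding: decode immediately, at full pixel size, so the work
// can happen on a background queue instead of the main thread.
let imageSourceAsyncOptions: [CFString: Any] = [
    kCGImageSourceShouldCacheImmediately: true,
    kCGImageSourceCreateThumbnailFromImageAlways: true
]
```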

We can also specify a subsampling level for Image I/O to perform downsampling. I chose to use an enum for it:
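
A sketch of such an enum; the raw values correspond to the subsample factors Image I/O accepts for the kCGImageSourceSubsampleFactor option:

```swift
// Subsampling factors supported by Image I/O (1 means no subsampling).
enum SubsamplingLevel: Int {
    case level0 = 1
    case level1 = 2
    case level2 = 4
    case level3 = 8

    static var `default`: SubsamplingLevel { .level0 }
}
```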

Decoding options tell ImageDecoder whether we’re running synchronously or asynchronously, and what image size is expected:
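
A minimal sketch of such a type (names assumed):

```swift
import CoreGraphics

struct DecodingOptions {
    enum Mode {
        case synchronous   // decode lazily, on first draw
        case asynchronous  // decode immediately, off the main thread
    }

    var mode: Mode
    var sizeForDrawing: CGSize?

    static var `default`: DecodingOptions {
        DecodingOptions(mode: .asynchronous, sizeForDrawing: nil)
    }
}
```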

Now we’re finally ready to create an image. We do it frame by frame:
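
A sketch of frame creation: the asynchronous path goes through CGImageSourceCreateThumbnailAtIndex so decoding happens immediately, while the synchronous path uses CGImageSourceCreateImageAtIndex (function and parameter names are my own):

```swift
import CoreGraphics
import ImageIO

func createFrameImage(from imageSource: CGImageSource,
                      at index: Int,
                      isAsynchronous: Bool,
                      sizeForDrawing: CGSize? = nil) -> CGImage? {
    guard index < CGImageSourceGetCount(imageSource) else { return nil }

    if isAsynchronous {
        var options: [CFString: Any] = [
            kCGImageSourceShouldCacheImmediately: true,
            // Ask for a "thumbnail" that is really the full image…
            kCGImageSourceCreateThumbnailFromImageAlways: true
        ]
        // …unless a target size was requested.
        if let size = sizeForDrawing {
            options[kCGImageSourceThumbnailMaxPixelSize] = Int(max(size.width, size.height))
        }
        return CGImageSourceCreateThumbnailAtIndex(imageSource, index, options as CFDictionary)
    } else {
        let options: [CFString: Any] = [kCGImageSourceShouldCache: true]
        return CGImageSourceCreateImageAtIndex(imageSource, index, options as CFDictionary)
    }
}
```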

We should also implement a status check to see whether the image is complete. WebCore handles some pitfalls there as well:
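
A minimal version of the status check, built on CGImageSourceGetStatus and CGImageSourceGetStatusAtIndex (helper names are mine):

```swift
import ImageIO

// Whether all the data for a particular frame has been received.
func isFrameComplete(in imageSource: CGImageSource, at index: Int) -> Bool {
    CGImageSourceGetStatusAtIndex(imageSource, index) == .statusComplete
}

// Whether the whole image source has all its data.
func isImageComplete(_ imageSource: CGImageSource) -> Bool {
    CGImageSourceGetStatus(imageSource) == .statusComplete
}
```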

ImageDecoder would not be complete without UIKit integration:
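
For instance, a small extension that wraps a decoded frame in a UIImage (an illustrative sketch, not necessarily the library’s actual API):

```swift
import UIKit
import ImageIO

extension CGImageSource {
    // Create a UIImage for the frame at the given index.
    func uiImage(at index: Int) -> UIImage? {
        guard let cgImage = CGImageSourceCreateImageAtIndex(self, index, nil) else {
            return nil
        }
        return UIImage(cgImage: cgImage)
    }
}
```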

Using ImageDecoder

Use this approach if you read the image data from a file or download it from the network using URLSessionDataTask:
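
A sketch, assuming a helper that feeds already-complete data to an incremental image source in one call:

```swift
import Foundation
import ImageIO

// When the data is fully available, update the source once and mark it final.
func makeImageSource(from data: Data) -> CGImageSource {
    let imageSource = CGImageSourceCreateIncremental(nil)
    CGImageSourceUpdateData(imageSource, data as CFData, true)  // true: all data received
    return imageSource
}
```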

When loading an image incrementally, create a data object for accumulating the image data. Pass the partial image data to ImageDecoder while loading is in progress, and the complete image data when loading completes. This example uses URLSessionDataDelegate:
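
A sketch of that flow (class and property names are mine):

```swift
import Foundation
import ImageIO

final class IncrementalImageLoader: NSObject, URLSessionDataDelegate {
    private var data = Data()
    private let imageSource = CGImageSourceCreateIncremental(nil)

    func urlSession(_ session: URLSession,
                    dataTask: URLSessionDataTask,
                    didReceive chunk: Data) {
        data.append(chunk)
        // Pass all the accumulated data; false: more data is coming.
        CGImageSourceUpdateData(imageSource, data as CFData, false)
        // A partial image may be available for display at this point.
    }

    func urlSession(_ session: URLSession,
                    task: URLSessionTask,
                    didCompleteWithError error: Error?) {
        guard error == nil else { return }
        // true: the data is now complete.
        CGImageSourceUpdateData(imageSource, data as CFData, true)
    }
}
```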

To create a static image, refer to the first frame:
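
A sketch using plain Image I/O calls for the first frame:

```swift
import UIKit
import ImageIO

// A static image is just the first frame of the source.
func staticImage(from imageSource: CGImageSource) -> UIImage? {
    guard let cgImage = CGImageSourceCreateImageAtIndex(imageSource, 0, nil) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}
```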

Animated images consist of multiple frames:
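
A sketch that builds an animated UIImage from all frames; for brevity it reads only the GIF delay time, while a real decoder would handle APNG and HEICS too:

```swift
import UIKit
import ImageIO

func animatedImage(from imageSource: CGImageSource) -> UIImage? {
    let count = CGImageSourceGetCount(imageSource)
    var frames: [UIImage] = []
    var totalDuration: TimeInterval = 0.0

    for index in 0..<count {
        guard let cgImage = CGImageSourceCreateImageAtIndex(imageSource, index, nil) else {
            continue
        }
        frames.append(UIImage(cgImage: cgImage))

        // Simplification: only the GIF delay time, with a 100 ms fallback.
        let properties = CGImageSourceCopyPropertiesAtIndex(imageSource, index, nil)
            as? [CFString: Any]
        let gif = properties?[kCGImagePropertyGIFDictionary] as? [CFString: Any]
        totalDuration += gif?[kCGImagePropertyGIFDelayTime] as? TimeInterval ?? 0.1
    }

    return UIImage.animatedImage(with: frames, duration: totalDuration)
}
```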

For convenience, you can use the UIKit integration extension from above:

SwiftUI doesn’t support animated images, but you can use UIImageView like this:
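
A sketch using UIViewRepresentable (the AnimatedImageView name is mine):

```swift
import SwiftUI
import UIKit

// Wraps a UIImageView so SwiftUI can display an animated UIImage,
// e.g. one built with UIImage.animatedImage(with:duration:).
struct AnimatedImageView: UIViewRepresentable {
    let image: UIImage

    func makeUIView(context: Context) -> UIImageView {
        let view = UIImageView(image: image)
        view.startAnimating()
        return view
    }

    func updateUIView(_ view: UIImageView, context: Context) {
        view.image = image
    }
}
```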


By default, Image I/O won’t load an image into memory at creation time; displaying the image will then still take time on the main thread. We can tell Image I/O to load images immediately by setting the kCGImageSourceShouldCacheImmediately option.

ImageDecoder provides two options via DecodingOptions.Mode:

  • The .asynchronous option sets kCGImageSourceShouldCacheImmediately. This is the default. Use it when decoding images on a background queue.
  • The .synchronous option doesn’t set it.

An additional benefit of using the WebKit source code is access to its test images, so ImageDecoder can be properly tested. I created a simple app and verified that images that can be decoded by UIImage also work with my decoder.

ImageDecoder is open source and you can find it on GitHub. You can install it with Swift Package Manager.

Also see the Image I/O programming guide, the WebKit source, and the ImageDecoderCG.cpp file.


iOS Developer, here to share best practices learned through my experience. You can find me on Twitter.
