Swift Accelerate and vImage: Getting Started

Learn how to process images using Accelerate and vImage in a SwiftUI application. By Bill Morefield.


Hooking it up to the UI

This pattern holds for most vImage processing routines: set up appropriate source and destination buffers, call the processing function whose suffix matches your data format, then check for errors and process the destination buffer as necessary.

It’s time to see this in action. Open ContentView.swift. Between the two ImageViews, add the following code:

HStack {
  Button("Equalize Histogram") {
    var imageWrapper = VImageWrapper(uiImage: originalImage)
    imageWrapper.equalizeHistogram()
    processedImage = imageWrapper.processedImage
  }
}
.disabled(originalImage.cgImage == nil)

You’ve added a button that will be active only after an image containing a valid CGImage loads. When tapped, the button calls equalizeHistogram() and places the result into the view’s processedImage property.

Build and run. Select a photo and tap the Equalize Histogram button. You’ll notice the dramatic change.

Green leaves with equalize histogram implemented

That’s a neat transformation, but now it’s time to look at another one.

Implementing Image Reflection

The steps that you followed for histogram equalization will work for almost any vImage image processing function. Now, you’ll add similar code to implement horizontal image reflection.

Open VImageWrapper.swift and add the following method to the struct:

mutating func reflectImage() {
  guard
    let image = uiImage.cgImage,
    var imageBuffer = createVImage(image: uiImage),
    var destinationBuffer = try? vImage_Buffer(
      width: image.width,
      height: image.height,
      bitsPerPixel: UInt32(image.bitsPerPixel))
  else {
    print("Error creating image buffers.")
    processedImage = nil
    return
  }
  defer {
    imageBuffer.free()
    destinationBuffer.free()
  }

  let error = vImageHorizontalReflect_ARGB8888(
    &imageBuffer,
    &destinationBuffer,
    vNoFlags)

  guard error == kvImageNoError else {
    printVImageError(error: error)
    processedImage = nil
    return
  }

  processedImage = convertToUIImage(buffer: destinationBuffer)
}

The only difference between this method and equalizeHistogram() is that it calls vImageHorizontalReflect_ARGB8888(_:_:_:) instead of vImageEqualization_ARGB8888(_:_:_:).

Open ContentView.swift and add the following code after your Equalize Histogram button at the end of the HStack:

Spacer()
Button("Reflect") {
  var imageWrapper = VImageWrapper(uiImage: originalImage)
  imageWrapper.reflectImage()
  processedImage = imageWrapper.processedImage
}

Build and run. Select a photo and tap the new Reflect button. You’ll see that, with only a small change, you’ve implemented another image manipulation.

Original waterfall and reflected waterfall

Now that you have a basic grasp of using vImage, you’ll explore a more complex task that will require you to delve into Objective-C patterns.

Histograms

An image histogram represents the distribution of tonal values in an image. It divides an image’s tones into bins displayed along the horizontal axis. The height of the histogram at each spot represents the number of pixels with that tone. At a glance, it provides an understanding of the overall exposure and balance of exposure in a photo. In image processing, the histogram can help with edge detection and segmentation tasks.
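To make the binning idea concrete, here’s a minimal pure-Swift sketch with no vImage involved. The 8-bit values below are invented for illustration; each element of bins counts how many pixels hold that exact tonal value:

```swift
// Invented 8-bit luminance values, purely for illustration.
let pixels: [UInt8] = [0, 0, 128, 128, 128, 255]

// One bin per possible 8-bit value, so 256 bins.
var bins = [Int](repeating: 0, count: 256)
for value in pixels {
  bins[Int(value)] += 1
}

print(bins[0], bins[128], bins[255])  // 2 3 1
```

vImage does exactly this, per channel and vastly faster, which is why each channel’s histogram array in the code you’ll write below has 256 entries.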

In this section, you’ll see how to get the histogram data for an image. In the process, you’ll learn more complex interactions with the vImage library, including working with patterns still built upon Objective-C.

Getting the Histogram

Open VImageWrapper.swift and add the following code before the VImageWrapper struct:

enum WrappedImage {
  case original
  case processed
}

You’ll use this enum type to distinguish between the original image and the processed image. Now, add the following code to the end of the VImageWrapper struct:

func getHistogram(_ image: WrappedImage) -> HistogramLevels? {
  guard
    // 1
    let cgImage =
      image == .original ? uiImage.cgImage : processedImage?.cgImage,
    // 2
    var imageBuffer = try? vImage_Buffer(cgImage: cgImage)
  else {
    return nil
  }
  // 3
  defer {
    imageBuffer.free()
  }
}

Nothing new here:

  1. You use the value of the WrappedImage enum to select either the original image or the processed image.
  2. Then, you create a vImage_Buffer for the image.
  3. Again, you use defer to call free() when exiting the scope.

Now, add the following code at the end of getHistogram(_:):

var redArray: [vImagePixelCount] = Array(repeating: 0, count: 256)
var greenArray: [vImagePixelCount] = Array(repeating: 0, count: 256)
var blueArray: [vImagePixelCount] = Array(repeating: 0, count: 256)
var alphaArray: [vImagePixelCount] = Array(repeating: 0, count: 256)

The histogram’s raw contents provide the number of pixels in each bin of the histogram, a value of type vImagePixelCount. So you create four 256-element arrays, one for each color channel along with the alpha channel. Each array has 256 elements, giving the histogram one bin for each of the 256 distinct values that the eight bits of channel data in the ARGB8888 format can hold.

Working with Pointers

Now, you come to pointers, something you probably hoped to avoid by using Swift! Add the following code after the array definitions you just added:

// 1
var error: vImage_Error = kvImageNoError
// 2
redArray.withUnsafeMutableBufferPointer { rPointer in
  greenArray.withUnsafeMutableBufferPointer { gPointer in
    blueArray.withUnsafeMutableBufferPointer { bPointer in
      alphaArray.withUnsafeMutableBufferPointer { aPointer in
        // 3
        var histogram = [
          rPointer.baseAddress, gPointer.baseAddress,
          bPointer.baseAddress, aPointer.baseAddress
        ]
        // 4
        histogram.withUnsafeMutableBufferPointer { hPointer in
          // 5
          if let hBaseAddress = hPointer.baseAddress {
            error = vImageHistogramCalculation_ARGB8888(
              &imageBuffer,
              hBaseAddress,
              vNoFlags
            )
          }
        }
      }
    }
  }
}

The Objective-C roots of the Accelerate framework show through here, despite the work to make the libraries more Swift-friendly. The function to calculate a histogram expects a pointer to an array that contains four more pointers. Each of these four pointers points to an array that will receive the counts for one channel. Almost all of this code converts a set of Swift arrays into this Objective-C pattern.

Here’s what the code does:

  1. Working with pointers in Swift becomes more comfortable when you define them inside closures. Because you set error several closures deep and still need it outside them, you define it before opening the closures.
  2. Swift provides several ways to access pointers for legacy needs such as this. withUnsafeMutableBufferPointer(_:) creates a pointer you can access within the method’s closure. You make one for each channel array. The pointers are only valid inside the closure passed to withUnsafeMutableBufferPointer(_:). This is why you must nest these calls.
  3. You create a new array whose elements are these four pointers. Note the order of the arrays here. The baseAddress property gets a pointer to the first element of a buffer — in this case, your array for each channel. At this point, you’ve built the structure that vImage expects for the histogram function call.
  4. You get a pointer to the array you just created, just as you did with the channel arrays, and work with it inside the closure.
  5. You unwrap the pointer to the first element of the histogram array, then call vImageHistogramCalculation_ARGB8888(_:_:_:), passing the image buffer, the unwrapped pointer and, again, a flag indicating no special instructions.
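You can see the same pointer gymnastics in isolation with a toy stand-in for the vImage call. The fillChannels(_:) function below is hypothetical — it simply writes through the same pointer-to-four-pointers shape that vImageHistogramCalculation_ARGB8888(_:_:_:) expects, so the nesting pattern can run anywhere plain Swift runs:

```swift
// Hypothetical stand-in for a C-style API that receives a pointer to
// four channel-array pointers, mirroring the vImage histogram call.
// (vImagePixelCount is an unsigned integer, modeled here as UInt.)
func fillChannels(
  _ histogram: UnsafeMutablePointer<UnsafeMutablePointer<UInt>?>
) {
  for channel in 0..<4 {
    // Write a marker value into the first bin of each channel array.
    histogram[channel]?[0] = UInt(channel + 1)
  }
}

var red = [UInt](repeating: 0, count: 256)
var green = [UInt](repeating: 0, count: 256)
var blue = [UInt](repeating: 0, count: 256)
var alpha = [UInt](repeating: 0, count: 256)

// The nested closures keep every buffer pointer alive for the call.
red.withUnsafeMutableBufferPointer { r in
  green.withUnsafeMutableBufferPointer { g in
    blue.withUnsafeMutableBufferPointer { b in
      alpha.withUnsafeMutableBufferPointer { a in
        var histogram = [
          r.baseAddress, g.baseAddress,
          b.baseAddress, a.baseAddress
        ]
        histogram.withUnsafeMutableBufferPointer { h in
          if let hBaseAddress = h.baseAddress {
            fillChannels(hBaseAddress)
          }
        }
      }
    }
  }
}

print(red[0], green[0], blue[0], alpha[0])  // 1 2 3 4
```

The nesting looks heavy-handed, but it’s what guarantees each baseAddress stays valid while the C-style function uses it, which is exactly the situation in getHistogram(_:).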