Creating an iOS 9 Photo Editing Extension





The primary purpose of the iOS 9 Photo Editing extension is to allow the photo editing capabilities of an application to be made available from within the standard iOS Photos application. Consider, for example, a scenario where a developer has published an app that allows users to apply custom changes and special effects to videos or photos. Prior to the introduction of extensions, the only way for a user to access these capabilities would have been to launch and work within that application. By placing some of the functionality of the application into a Photo Editing extension, the user is now able to select videos or photos from within the Photos app and choose the extension from a range of editing options available on the device. Once selected, the user interface for the extension is displayed to the user so that changes can be made to the chosen image or video. Once the user has finished making the changes and exits the extension, the modified image or video is returned to the Photos app.

Creating a Photo Editing Extension

As with all extension types, by far the easiest starting point when creating a Photo Editing extension is to use the template provided by Xcode. For the purposes of this chapter, create a new Xcode project named PhotoDemo using the Single View Application template, the Swift programming language and with the Devices menu set to Universal.

Once the application project has been created, a new target will need to be added for the Photo Editing extension. To achieve this, select the File -> New -> Target… menu option and, in the resulting panel, select the Application Extension option listed under iOS in the left hand column and Photo Editing Extension from the main panel as shown in Figure 86-1:



Figure 86-1


With the appropriate options selected, click on the Next button and enter MyPhotoExt into the Product Name field. Leave the remaining fields set to the default values and click on Finish to complete the extension creation process. When prompted, click on the Activate button to activate the scheme created by Xcode to enable the extension to be built and run.

Once the extension has been added, it will appear in the project navigator panel under the MyPhotoExt folder. This folder will contain both the Swift source code file for the extension’s view controller, named PhotoEditingViewController.swift, and the corresponding user interface storyboard file, named MainInterface.storyboard. In addition, an Info.plist file will be present in this folder.

Accessing the Photo Editing Extension

Before beginning work on implementing the functionality of the extension, it is important to learn how to access such an extension from within the iOS Photos app. Begin by verifying that the MyPhotoExt build scheme is selected in the Xcode toolbar as illustrated in Figure 86-2.



Figure 86-2


If the extension is not currently selected, click on the current scheme name and select MyPhotoExt from the drop down menu. Also make sure that a physical iOS device is connected to the development system and selected within Xcode as the destination for the running application.

Having verified that the appropriate scheme is selected, click on the toolbar run button. Since this is an extension, it can only be run within the context of a host application. As a result, Xcode will display a panel listing the applications installed on the attached device. From this list of available applications (Figure 86-3), select the Photos app and click on the Run button.



Figure 86-3


After the extension and containing application have been compiled and installed on the device, the Photos app will automatically launch. If it does not, launch it manually from the device screen. Once the Photos app appears, select a photo from those stored on the device, then tap on the Edit button located in the top right hand corner of the screen to enter the standard editing interface of the Photos app. Within the toolbar along the bottom of the Photos editing screen is a small round button containing three dots (as highlighted in Figure 86-4):



Figure 86-4


Tapping this button will display the action panel (as shown in Figure 86-5) where Photo Editing extensions may be chosen and used to edit videos and images.



Figure 86-5


Assuming that the extension for our PhotoDemo application is displayed, select it and wait for it to launch. Once loaded, the extension will display the user interface defined in the MyPhotoExt -> MainInterface.storyboard file.


Configuring the Info.plist File

A Photo Editing extension must declare the type of media it is able to edit. This is specified via the PHSupportedMediaTypes key within the NSExtension section of the extension’s Info.plist file. By default, the Photo Editing template declares that the extension is capable of editing only images as follows:

<plist version="1.0">
.
.
.
       <key>NSExtension</key>
        <dict>
                <key>NSExtensionAttributes</key>
                <dict>
                        <key>PHSupportedMediaTypes</key>
                        <array>
                                <string>Image</string>
                        </array>
                </dict>
                <key>NSExtensionMainStoryboard</key>
                <string>MainInterface</string>
                <key>NSExtensionPointIdentifier</key>
                <string>com.apple.photo-editing</string>
        </dict>
</dict>
</plist>

If the extension is also able to edit video files, the PHSupportedMediaTypes entry within the file would be modified as follows:

<key>PHSupportedMediaTypes</key>
      <array>
          <string>Video</string>
          <string>Image</string>
      </array>

For the purposes of this example, leave the Info.plist file unchanged with support for images only.

Designing the User Interface

The user interface for the extension is going to consist of an Image View and a Toolbar containing three Bar Button Items. Within the Xcode project navigator panel, locate and load the MyPhotoExt -> MainInterface.storyboard file into Interface Builder and select and delete the “Hello World” Label view. With a clean canvas, design and configure the layout so that it matches that of Figure 86-6:



Figure 86-6


Select the Image View, display the Attributes Inspector panel and change the Mode setting to Aspect Fit.

With the Image View still selected, display the Auto Layout Pin menu and set Spacing to nearest neighbor constraints on all four sides of the view with the Constrain to margins option switched off. Click to select the Toolbar view and use the Auto Layout Pin menu once again to apply Spacing to nearest neighbor constraints on the left, right and bottom edges of the view with the Constrain to margins option still switched off. Before adding the constraints, also enable the Height constraint option using the currently displayed value.

Display the Assistant Editor and verify that it is displaying the source code for the PhotoEditingViewController.swift file. Select the Bar Button Item displaying the “Sepia” text (note that it may be necessary to click twice since the first click will select the parent Toolbar view). With the item selected, Ctrl-click on the item and drag the resulting line to a position immediately beneath the end of the implementation of the viewDidLoad method in the Assistant Editor panel. Release the line and, in the connection dialog, establish an Action named sepiaSelected. Repeat these steps for the “Mono” and “Invert” Bar Button Items, naming the Actions monoSelected and invertSelected respectively.

Finally, Ctrl-click on the Image View and drag the resulting line to a position beneath the “class PhotoEditingViewController” declaration. Release the line and establish an Outlet for the Image View named imageView.


The PHContentEditingController Protocol

When Xcode created the template for the Photo Editing extension it created a View Controller class named PhotoEditingViewController and declared it as implementing the PHContentEditingController protocol. It also generated stub methods for each of the methods that must be implemented in order for the class to conform to the protocol. The remainder of the work in implementing a Photo Editing extension primarily consists of writing the code for these methods to implement the required editing behavior. One of the first methods that will need to be implemented relates to the issue of adjustment data.

Photo Extensions and Adjustment Data

When a Photo Extension is selected by the user, a method named canHandleAdjustmentData is called on the view controller class of the extension. The method must return a true or false value depending on whether or not the extension supports adjustment data.

If an extension supports adjustment data, it is passed a copy of the original image or video together with a set of data outlining any earlier modifications made to the media during previous editing sessions. The extension then re-applies those changes to the image or video to bring it back to the state it was in at the end of the last editing session. The advantage of this approach is that the extension is able to offer the user the ability to undo any editing operations performed within previous sessions using the extension. When editing is completed, the extension returns the modified image or video file, together with any new adjustment data reflecting edits that were performed during the current session.

If an image editing extension indicates that it does not support adjustment data, it is passed a copy of the modified image as it appeared at the end of the last editing session. This enables the user to perform additional editing tasks but does not allow previous edits to be undone. In the case of video editing extensions that do not support adjustment data, the extension will be passed the original video and previous edits will be lost. Clearly, therefore, supporting adjustment data is an important requirement for video editing.
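The round trip described above can be sketched using plain Swift types in place of the Photos framework classes. This is purely a toy model: the struct and function names below are illustrative assumptions, and a real extension archives its edit record with NSKeyedArchiver inside a PHAdjustmentData object rather than using these stand-in types.

```swift
// Toy model (not Photos framework types) of the adjustment data round trip.
// Session one returns the edited image plus a record of the edits performed;
// session two replays those edits against the original image so that each
// earlier edit remains individually undoable.
struct ToyAdjustmentData {
    let formatIdentifier: String
    let formatVersion: String
    let filterNames: [String]      // edits applied, in order
}

// End of session one: record the edit performed alongside the rendered output.
let savedEdits = ToyAdjustmentData(formatIdentifier: "com.ebookfrenzy.photoext",
                                   formatVersion: "1.0",
                                   filterNames: ["CISepiaTone"])

// Start of session two: replay the recorded edits against the original image.
// Applying a filter is modeled here as appending its name to the image
// description; a real extension would run the named Core Image filter instead.
func replay(edits: ToyAdjustmentData, on original: String) -> String {
    return edits.filterNames.reduce(original) { image, filter in
        image + "+" + filter
    }
}

print(replay(edits: savedEdits, on: "original"))  // original+CISepiaTone
```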

While the example contained within this tutorial will store and return adjustment data to the Photos app, allowing for future improvements to the extension, it will not handle incoming adjustment data. Within the PhotoEditingViewController.swift file, therefore, locate and review the canHandleAdjustmentData method and verify that it is configured to return a false value:

func canHandleAdjustmentData(adjustmentData: PHAdjustmentData?) -> Bool {
        return false
}

Receiving the Content

The next method that will be called on the extension View Controller class is startContentEditingWithInput.

This method is passed as arguments a PHContentEditingInput object and a placeholder image. For images, this object contains a compressed version of the image suitable for displaying to the user, a URL referencing the location of the full size image, information about the orientation of the image and, in the case of extensions with adjustment data support, a set of adjustment data from previous edits.

As previously discussed, image extensions with adjustment data support implemented are passed the original image and a set of adjustments to be made to reach parity with the latest state of editing. Since it can take time to render these changes, the placeholder argument contains a snapshot of the image as it currently appears. This can be displayed to the user while the adjustment data is applied and the image rendered in the background.

For this example, the startContentEditingWithInput method will be implemented as follows:

import UIKit
import Photos
import PhotosUI

class PhotoEditingViewController: UIViewController, PHContentEditingController {

    @IBOutlet weak var imageView: UIImageView!

    var input: PHContentEditingInput?
    var displayedImage: UIImage?
    var imageOrientation: Int32?
.
.
.
    func startContentEditingWithInput(contentEditingInput:
         PHContentEditingInput?, placeholderImage: UIImage) {

        input = contentEditingInput

        if input != nil {
            displayedImage = input!.displaySizeImage
            imageOrientation = input!.fullSizeImageOrientation
            imageView.image = displayedImage
        }
    }
.
.
.
}

The above changes declare three optional variables to store the editing input, the display sized image and the image orientation. The code in the method assigns the display sized image from the PHContentEditingInput object passed to the method to the displayedImage variable and stores the orientation setting in the imageOrientation variable. Finally, the display sized image is displayed on the Image View in the user interface so that it is visible to the user.

Compile and run the extension, selecting the Photos app as the host application, and verify that the extension displays a copy of the image in the Image View of the extension View Controller.

Implementing the Filter Actions

The actions connected to the Bar Button Items will change the image by applying Core Image sepia, monochrome and invert filters. Until the user commits the edits made in the extension, any filtering will be performed only on the display sized image to avoid the rendering delays that are likely to be incurred working on the full sized image. Having performed the filter, the modified image will be displayed on the image view instance.

Remaining within the PhotoEditingViewController.swift file, implement the three action methods as follows:

class PhotoEditingViewController: UIViewController, PHContentEditingController {

    @IBOutlet weak var imageView: UIImageView!

    var input: PHContentEditingInput?
    var displayedImage: UIImage?
    var imageOrientation: Int32?
    var currentFilter = "CIColorInvert"

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
    }

    @IBAction func sepiaSelected(sender: AnyObject) {
        currentFilter = "CISepiaTone"

        if displayedImage != nil {
            imageView.image = performFilter(displayedImage!,
                orientation: nil)
        }
    }

    @IBAction func monoSelected(sender: AnyObject) {
        currentFilter = "CIPhotoEffectMono"

        if displayedImage != nil {
            imageView.image = performFilter(displayedImage!,
                orientation: nil)
        }
    }

    @IBAction func invertSelected(sender: AnyObject) {
        currentFilter = "CIColorInvert"

        if displayedImage != nil {
            imageView.image = performFilter(displayedImage!,
                orientation: nil)
        }
    }
.
.
.
}

In each case, a method named performFilter is called to perform the image filtering task. The next step, clearly, is to implement this method using the techniques outlined in the chapter entitled An iOS 9 Graphics Tutorial using Core Graphics and Core Image:

func performFilter(inputImage: UIImage, orientation: Int32?)
                    -> UIImage?
{
    var resultImage: UIImage?
    var cimage: CIImage
    cimage = CIImage(image: inputImage)!

    if orientation != nil {
        cimage = cimage.imageByApplyingOrientation(orientation!)
    }

    if let filter = CIFilter(name: currentFilter) {
        filter.setDefaults()
        filter.setValue(cimage, forKey: "inputImage")

        switch currentFilter {

            case "CISepiaTone", "CIEdges":
                filter.setValue(0.8, forKey: "inputIntensity")

            case "CIMotionBlur":
                filter.setValue(25.00, forKey:"inputRadius")
                filter.setValue(0.00, forKey:"inputAngle")

            default:
                break
        }
        let ciFilteredImage = filter.outputImage
        let context = CIContext(options: nil)
        let cgImage = context.createCGImage(ciFilteredImage!,
                    fromRect: ciFilteredImage!.extent)
        resultImage = UIImage(CGImage: cgImage)
    }
    return resultImage
}


The above method takes the image passed through as a parameter, takes steps to maintain the original orientation and performs an appropriately configured filter operation on the image based on the current value assigned to the currentFilter variable. The filtered image is then returned to the calling method.

Compile and run the extension once again, this time using the filter buttons to change the appearance of the displayed image.

Returning the Image to the Photos App

When the user has finished making changes to the image and touches the Done button located in the extension toolbar, the finishContentEditingWithCompletionHandler method of the View Controller is called. This is passed a reference to a completion handler which must be called once the image has been rendered and is ready to be returned to the Photos app.

Before calling the completion handler, however, this method performs the following tasks:

1. Obtains a copy of the full size version of the image.

2. Ensures that the original orientation of the image is preserved through the rendering process.

3. Applies to the full sized image all of the editing operations previously performed on the display sized image.

4. Renders the new version of the full sized image.

5. Packages up the adjustment data outlining the edits performed during the session.

Since the above tasks (particularly the rendering phase) are likely to take time, these must be performed within a separate asynchronous thread. The code to complete this example extension can now be implemented within the template stub of the method as follows:

func finishContentEditingWithCompletionHandler(completionHandler:
        ((PHContentEditingOutput!) -> Void)!) {

    dispatch_async(dispatch_get_global_queue(
            CLong(DISPATCH_QUEUE_PRIORITY_DEFAULT), 0)) {

        let output = PHContentEditingOutput(
            contentEditingInput: self.input!)

        let url = self.input?.fullSizeImageURL

        if let imageUrl = url {
            let fullImage = UIImage(contentsOfFile: imageUrl.path!)

            let resultImage = self.performFilter(fullImage!,
                orientation: self.imageOrientation)

            if let renderedJPEGData =
                    UIImageJPEGRepresentation(resultImage!, 0.9) {
                renderedJPEGData.writeToURL(output.renderedContentURL,
                    atomically: true)
            }

            let archivedData = NSKeyedArchiver.archivedDataWithRootObject(
                self.currentFilter)

            let adjustmentData = PHAdjustmentData(
                formatIdentifier: "com.ebookfrenzy.photoext",
                formatVersion: "1.0",
                data: archivedData)

            output.adjustmentData = adjustmentData
        }
        completionHandler?(output)
    }
}

The code begins by creating a new instance of the PHContentEditingOutput class, initialized with the content of the input object originally passed into the extension:

let output = PHContentEditingOutput(
    contentEditingInput: self.input!)

Next, the URL of the full sized version of the image is extracted from the original input object and the corresponding image loaded into a UIImage instance. The full sized image is then filtered via a call to the performFilter method:

let url = self.input?.fullSizeImageURL

if let imageUrl = url {
    let fullImage = UIImage(contentsOfFile: imageUrl.path!)

    let resultImage = self.performFilter(fullImage!,
        orientation: self.imageOrientation)


With the editing operations now applied to the full sized image, it is rendered into JPEG format and written out to the location specified by the URL assigned to the renderedContentURL property of the previously created PHContentEditingOutput instance:

if let renderedJPEGData =
        UIImageJPEGRepresentation(resultImage!, 0.9) {
    renderedJPEGData.writeToURL(output.renderedContentURL,
        atomically: true)
}

Although the extension had previously indicated that it was not able to accept adjustment data, returning adjustment data reflecting the edits performed on the image to the Photos app is mandatory. For this tutorial, the name of the Core Image filter used to modify the image is archived into an NSData instance and packaged, together with a format version and a unique format identifier, into a PHAdjustmentData instance which is then assigned to the adjustmentData property of the output object:

let archivedData = NSKeyedArchiver.archivedDataWithRootObject(
    self.currentFilter)

let adjustmentData = PHAdjustmentData(
    formatIdentifier: "com.ebookfrenzy.photoext",
    formatVersion: "1.0", data: archivedData)

output.adjustmentData = adjustmentData

If the extension were to be enhanced to handle adjustment data, code would need to be added to the canHandleAdjustmentData method to compare the formatVersion and formatIdentifier values from the incoming adjustment data with those specified in the outgoing data to verify that the data is compatible with the editing capabilities of the extension.
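As a minimal sketch of such a check, assuming the identifier and version strings used earlier in this chapter, the comparison might be factored into a helper like the following. The helper name is an illustrative assumption and not part of the Xcode template:

```swift
// Illustrative helper: returns true only if incoming adjustment data was
// produced by a version of this extension whose archived data this code
// knows how to interpret. The identifier and version strings match those
// used when creating the outgoing PHAdjustmentData in this chapter.
func isCompatibleAdjustmentData(formatIdentifier: String,
                                formatVersion: String) -> Bool {
    return formatIdentifier == "com.ebookfrenzy.photoext"
        && formatVersion == "1.0"
}

// canHandleAdjustmentData would return the result of this check, passing in
// the formatIdentifier and formatVersion properties of the incoming
// PHAdjustmentData object.
print(isCompatibleAdjustmentData(formatIdentifier: "com.ebookfrenzy.photoext",
                                 formatVersion: "1.0"))   // true
print(isCompatibleAdjustmentData(formatIdentifier: "com.other.photoext",
                                 formatVersion: "1.0"))   // false
```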

Finally, the completion handler is called and passed the fully configured output object. At this point, control will return to the Photos app and the modified image will appear in the Photos editing screen.

Testing the Application

Build and run the extension using the Photos app as the host and follow the now familiar steps to select an image and invoke the newly created Photo Editing extension. Use a toolbar button to change the appearance of the image before tapping the Done button. The modified image will subsequently appear within the Photos app editing screen (Figure 86-7 shows the results of the invert filter) where the changes can be committed or discarded:



Figure 86-7

Summary

The Photo Editing extension allows the image editing capabilities of a containing app to be accessed from within the standard iOS Photos app. A Photo Editing extension takes the form of a view controller which implements the PHContentEditingController protocol and the protocol’s associated delegate methods.




