Yesterday I presented a talk at iOS Dev Camp DC about what I call “Metal with Training Wheels.” Apple has introduced a number of abstracted frameworks on top of Metal that let you take advantage of Metal without having to set up the entire pipeline of bit manipulation and memory allocation yourself.

Along with Apple’s built-in abstracted frameworks, I also announced the launch of a new (old) third-party framework that works with streaming video, one of the few use cases without an abstracted framework: GPUImage.

History of GPUImage

Back in 2012, Brad Larson introduced GPUImage to the world. GPUImage started as a sample project about GPU-accelerated live video filtering for Second Conf. So many people requested access to the sample code from that talk that Brad created a full open source project around it.

The project was wildly popular and many people incorporated it into their projects. But then in 2014, Apple introduced Swift. Over the years, many people moved away from wanting to use Objective-C frameworks. Additionally, Swift introduced language features that significantly cut down on the amount of code within GPUImage. As such, in 2016, Brad introduced GPUImage 2.

GPUImage 2 was leaner than GPUImage, yet still largely backward compatible. With the open sourcing of Swift, Brad felt it was important to keep the rendering pipeline in OpenGL ES to make GPUImage cross-platform. He was able to get GPUImage 2 working on a Raspberry Pi running Linux.

With the recent WWDC announcement of the deprecation of OpenGL on the Mac, the writing is on the wall for OpenGL. However, the writing is not on the wall for GPUImage as a framework. GPUImage has evolved in the past and it will continue to evolve to account for evolutions in the Apple developer ecosystem.

Why is GPUImage Still Relevant?

From one perspective, it might make sense to retire GPUImage. When it was introduced, there was no framework for image processing on iOS, as Core Image was Mac-only. Even after Core Image came to iOS, it did not initially allow users to write their own image processing kernels. Eventually this functionality was introduced. As of iOS 11, you can even write your own kernels in Metal rather than the Core Image Kernel Language.
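For a sense of what that looks like, here is a minimal sketch of a Core Image kernel written in Metal Shading Language. The kernel name and its brightness parameter are hypothetical examples; the `coreimage` namespace and `sample_t` type follow Apple’s conventions for Metal-based CIKernels, and the source file must be compiled with the Core Image kernel flags before it can be loaded via `CIColorKernel(functionName:fromMetalLibraryData:)`.

```metal
// Sketch of a Metal-based Core Image color kernel (iOS 11+).
// Compile with -fcikernel and link with -cikernel.
#include <CoreImage/CoreImage.h>

extern "C" {
namespace coreimage {
    // Hypothetical example kernel: add a flat amount to each channel.
    float4 brighten(sample_t s, float amount) {
        return float4(s.rgb + amount, s.a);
    }
}
}
```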

However, I believe there are several good reasons to update GPUImage for Metal.

The main reason, in my mind, is that people still use the framework. Many applications have been built on GPUImage. I feel it’s important to continue to support those users by making sure their applications don’t suddenly stop working whenever Apple throws the switch on OpenGL ES for iOS.

Another reason is that, despite all of the advances made to the Cocoa ecosystem, there is still no abstracted framework for live video filtering. Capturing and filtering frames of live video requires you to dig into AVFoundation to capture each frame, convert it to a texture, add it to a processing pipeline, write shaders, and then output the result. This year I have had multiple people contact me about doing GPU-accelerated live video filtering. There isn’t really a reason not to have an abstracted framework around this boilerplate code, and GPUImage fills that niche.
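To illustrate how much of that boilerplate the framework hides, here is a rough sketch of a live camera filter chain using GPUImage 2’s Swift API. This is based on the project’s published examples; exact initializer signatures and filter names vary between releases, and `renderView` is assumed to be a GPUImage `RenderView` already placed in your view hierarchy.

```swift
import GPUImage
import AVFoundation

// Sketch only: assumes a RenderView named `renderView` exists in the UI.
do {
    let camera = try Camera(sessionPreset: .vga640x480)
    let filter = SaturationAdjustment()
    filter.saturation = 0.5

    // The --> operator chains a source, filters, and an output into a pipeline;
    // all of the AVFoundation capture and texture conversion happens internally.
    camera --> filter --> renderView
    camera.startCapture()
} catch {
    print("Could not initialize camera: \(error)")
}
```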

The last important reason, in my mind, is the shader library. Many people are interested in learning about shaders, and there simply aren’t many great learning materials for people who are just starting out. There are 3D math books that explain what vectors are without any context as to how they are used. Most of the GLSL books I have found spend half the book explaining OpenGL-specific state pipeline machinery that doesn’t exist in Metal.

The filter library has a wide range of image processing shaders in varying degrees of complexity. It’s an invaluable resource for beginning shader developers, who can reference it to see how a shader is assembled. By contrast, sites like Shadertoy are far too complicated for a beginner to figure out.

You can use Core Image to quickly get image filtering up and running, but the logic of its built-in filters isn’t exposed to the programmer, so you can’t really use them to learn how to make complex shaders.

I believe having a framework full of publicly exposed shaders that allows you to easily set up live video filtering is important. I want to see GPUImage weather the transition away from OpenGL. I want people who are interested in Metal to have access to a working project to learn from.

It is my intention to create a series of blog posts about the functionality of most of the shaders. Some shaders are quite similar and may be covered under a single blog post. But it’s my intention for the reader to be able to understand that the shaders are not magic and to be able to think algebraically so they can approach making their own shaders with the right tools.

GPUImage was one of the things that brought Brad and me together. I knew who he was because of GPUImage, and I reached out to him because of my interest in graphics programming. I am very happy that he felt comfortable handing the reins of his framework over to me to shepherd it into the Metal ecosystem. I would like to be a good steward of this framework so that it can continue to be a useful asset and tool for the people who come after me.
