This is the first in a series of blog posts about the math behind image manipulation filters used in GPUImage 3. I am hoping that these posts will give the reader a good foundation around common shader functions that can be used to build more complex shaders.

One of the most basic color manipulation functions is color inversion. Color inversion requires just a single input: the original pixel color. The color is inverted, and the value is returned to the rest of the rendering pipeline.

## How Does Color Inversion Work?

In order to work with and understand shaders and image/graphics manipulation, you need to understand how the computer processes images.

If you’ve ever worked with Photoshop, you’ve probably worked a bit with the color tools, which ask you to choose a red, green, and blue value between 0 and 255. Most of the colors we can see can be expressed as a mixture of red, green, and blue, and most of those values can be expressed using 8 bits of data for each color component. 8 bits can represent 256 distinct values, hence the range of 0 to 255.

## Color Representation in Metal

In the Metal Shading Language, the fragment function returns a `half4` value. `half4` is a four-element data structure composed of half-precision floats. A regular `float` occupies 32 bits, while a half float occupies 16 bits. Metal is natively optimized for 16-bit data types, so use those when possible.

You might be wondering why we have a `half4` if we only have red, green, and blue values. The final value is for the alpha channel, which controls the opacity of the color output.

Apple’s Cocoa frameworks represent each color component as a value between 0.0 and 1.0 rather than between 0 and 255. This means that to invert each color component, you simply need to subtract its value from 1.0. You can take my word for it, or we can look over a few simple examples of this in practice.

White is created by outputting 100% of red, green, and blue. This is represented as:

```
half4(1.0, 1.0, 1.0, 1.0)
```

Subtract each of those values from 1.0 and you wind up with:

```
half4(0.0, 0.0, 0.0, 1.0)
```

That example is pretty easy and self-explanatory, so let’s look at a slightly more complex one: inverting blue. To have pure blue on the screen, you have 100% blue and 0% red and green:

```
half4(0.0, 0.0, 1.0, 1.0)
```

Each of these values is subtracted from one:

Red = 1.0 - 0.0 = 1.0
Green = 1.0 - 0.0 = 1.0
Blue = 1.0 - 1.0 = 0.0

The inverted blue value is:

```
half4(1.0, 1.0, 0.0, 1.0)
```

This results in yellow.

So far all of these examples have used values of either 0% or 100%. Does this still work at values in the middle? Absolutely.

```
half4(0.5, 0.5, 0.5, 1.0)
```

This is middle gray. Inverting it should give back exactly the same color. Let’s try it out:

Each of these values is subtracted from one:

Red = 1.0 - 0.5 = 0.5
Green = 1.0 - 0.5 = 0.5
Blue = 1.0 - 0.5 = 0.5

As you can see, none of these values changed, which is as it should be:

```
half4(0.5, 0.5, 0.5, 1.0)
```

For GPUImage, we didn’t create a separate vertex shader for every fragment shader. Many classes of shaders need the same inputs, so we set up a single vertex shader shared by all fragment shaders that take a single input. The output type of that vertex shader is `SingleInputVertexIO`:

```
struct SingleInputVertexIO
{
    float4 position [[position]];
    float2 textureCoordinate [[user(texturecoord)]];
};
```

Each of our single input shaders requires the current vertex position and the texture coordinate. Here is the vertex function that produces this output:

```
vertex SingleInputVertexIO oneInputVertex(
    device packed_float2 *position [[buffer(0)]],
    device packed_float2 *texturecoord [[buffer(1)]],
    uint vid [[vertex_id]])
{
    SingleInputVertexIO outputVertices;

    outputVertices.position = float4(position[vid], 0, 1.0);
    outputVertices.textureCoordinate = texturecoord[vid];

    return outputVertices;
}
```

The vertex function pulls in the position and texture coordinate that were encoded into the buffers on the CPU side. Since most of our processing happens in the individual fragment shaders, the purpose of this vertex function is essentially to pass each vertex's position and texture coordinate through to the fragment function.

Here is the final color inversion fragment function from GPUImage:

```
fragment half4 colorInversionFragment(
    SingleInputVertexIO fragmentInput [[stage_in]],
    texture2d<half> inputTexture [[texture(0)]])
{
    constexpr sampler quadSampler;
    half4 color = inputTexture.sample(
        quadSampler, fragmentInput.textureCoordinate);

    return half4((1.0 - color.rgb), color.a);
}
```

The fragment function has two parameters:

• The current interpolated position
• The current texture

The texture tells us what image we’re processing and the position tells us which specific pixel the fragment shader will be processing.

First, we create a sampler to sample from the texture. Next, we store the current sample color in a variable by sampling the texture at the specific coordinate we received in the parameters.

Finally, we do our color inversion calculation. The first three values are the red, green, and blue values we are inverting. The final value is the alpha/opacity value; we do not want to invert it, so it is simply passed through as is.

## Conclusion

Sorry if this post feels like beating a dead horse by stating the obvious. Personally, I had to change the way I think about programming to grok how to create shaders. Breaking a shader down into these simple truths and components helped me see that there is a reason this formula exists, instead of just copying and pasting an algorithm I found online.

With graphics, everything is expressed mathematically. It’s important to realize that the people who wrote these algorithms were attempting to create an effect and had to think about how to accomplish that mathematically. These aren’t magic. Every shader I go through for the rest of this series builds on the ideas I express here.