In the last class, we hinted briefly at how we can manipulate texture coordinates in the fragment shader when we looked at the textureWrap() options. Remember that texture coordinates are just two numbers, a vec2, so there is a lot we can do with them in the fragment shader to get interesting effects, sometimes even without sampling an actual texture.
Quantization
We can use the webcam as our input texture to add a bit of motion to our sketch. This is as simple as setting up the webcam with createCapture() and passing the webcam object as a uniform sampler2D to our shader. Note that we are not passing the entire webcam object into the shader; p5.js automatically extracts the current frame of video and passes it as a texture to the shader.
The first effect we will try is to pixelate the image. There are a couple of ways to do this:
 The simple way is to sample the same point in the video for many neighboring points on screen.
 The harder way is to average many neighboring points in the video, and set the average color to the corresponding points on screen.
Let’s attempt the simple version. This concept of limiting values to a smaller defined set is called quantizing. If we think about this in pseudocode, we want to map a cluster of texture coordinates to one value, and then sample the video texture using this single texture coordinate. One way to do this is to remove some precision from the texture coordinates, so that a range of nearby coordinates ends up being equal. Take a look at the following example, where we keep only 1 decimal place and discard everything smaller.
0.0134 => 0.0
0.0232 => 0.0
0.0754 => 0.0
0.1000 => 0.1
0.1564 => 0.1
0.2999 => 0.2
0.3119 => 0.3
0.3487 => 0.3
If we use the 1 decimal place version as our texture coordinate, we will end up with the same value for all coordinates from 0.0 to 0.1, from 0.1 to 0.2, from 0.2 to 0.3, etc.
In GLSL, the mod() function can help us with this calculation. mod() is similar to the modulo operator %, except it does not only work on integers; it can be used with floats too! mod() gives us the decimal places we want to ignore. We just need to subtract those from the original texture coordinate to get our modified version.
mod(0.0134, 0.1) = 0.0134
mod(0.0232, 0.1) = 0.0232
mod(0.0754, 0.1) = 0.0754
mod(0.1000, 0.1) = 0.0000
mod(0.1564, 0.1) = 0.0564
mod(0.2999, 0.1) = 0.0999
mod(0.3119, 0.1) = 0.0119
mod(0.3487, 0.1) = 0.0487
0.0134 - 0.0134 = 0.0
0.0232 - 0.0232 = 0.0
0.0754 - 0.0754 = 0.0
0.1000 - 0.0000 = 0.1
0.1564 - 0.0564 = 0.1
0.2999 - 0.0999 = 0.2
0.3119 - 0.0119 = 0.3
0.3487 - 0.0487 = 0.3
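The subtraction above can be sketched in JavaScript; the function name quantize and the 0.1 step size are illustrative choices, not from the original sketch, and JavaScript's % operator plays the role of GLSL's mod() here:

```javascript
// Snap a coordinate down to the nearest multiple of `step` by
// subtracting the remainder, mirroring the GLSL mod() trick.
function quantize(coord, step) {
  return coord - (coord % step);
}

// All coordinates within the same 0.1-wide band map to the same value.
quantize(0.0134, 0.1); // ≈ 0.0
quantize(0.1564, 0.1); // ≈ 0.1
quantize(0.3487, 0.1); // ≈ 0.3
```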
Here it is in action in the fragment shader. This sketch uses createSlider() to add an HTML slider to the page, which is used to tweak the size of the pixels in the program. Note that our texture coordinates are normalized, so this value must be in the range [0.0, 1.0].
We can selectively pixelate parts of the image by calculating different pixelate values for different fragments. To do this, we can generate a mask over the image and use the mask to determine how much to pixelate each fragment: a black 0.0 value means don’t pixelate at all, a white 1.0 value means pixelate at the maximum level, and anything in between pixelates at an intermediate level, relative to the gray value of the mask.
The following sketch uses the mouse position to set the center of the mask, and an HTML slider to set its size. These parameters are converted from p5.js space to normalized space in the vertex shader, then passed to the fragment shader using a varying variable. This calculation could have been made in the fragment shader, but the shader will run faster if it is done in the vertex shader, because there are far fewer vertices than fragments to process.
The fragment shader uses the distance() function to calculate the distance between the mask center and the texture coordinate, and the smoothstep() function to get a gradient value between 0.0 and 1.0. A value of 0.0 means the texture coordinate is right on the mask center point. This value increases until it reaches 1.0 at the edge of the mask. A value above 1.0 means the fragment is out of bounds, which we don’t need to worry about for this particular program. We want the closest point to have the effect at its fullest, so we invert the mask with 1.0 - val.
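As a rough sketch, the mask calculation translates to JavaScript like this; smoothstep() and the distance calculation mirror the GLSL built-ins, while maskValue, center, and radius are illustrative names:

```javascript
// GLSL-style smoothstep: 0.0 at edge0, 1.0 at edge1, smooth in between.
function smoothstep(edge0, edge1, x) {
  const t = Math.min(Math.max((x - edge0) / (edge1 - edge0), 0.0), 1.0);
  return t * t * (3.0 - 2.0 * t);
}

function maskValue(texCoord, center, radius) {
  // distance() between the texture coordinate and the mask center.
  const d = Math.hypot(texCoord[0] - center[0], texCoord[1] - center[1]);
  // 0.0 at the center, rising to 1.0 at the edge of the mask,
  // then inverted so the effect is strongest at the center.
  return 1.0 - smoothstep(0.0, radius, d);
}

maskValue([0.5, 0.5], [0.5, 0.5], 0.2); // → 1.0 (right on the center)
maskValue([0.9, 0.5], [0.5, 0.5], 0.2); // → 0.0 (outside the mask)
```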
If we make the radius small enough and move the mouse to the corners, you will notice that the mask center does not quite correspond to the mouse cursor. We normalized the mouse coordinates using uResolution as the range, but this is incorrect: uResolution corresponds to the entire sketch window, not just the rectangle we are rendering with the shader. An easy fix is to remap the mouse coordinates when we pass them in as a uniform.
Now that we have an accurate mask, we can use its value to set a pixelation value between 0.0 and uPixelate. An easy way to do this is to use an algorithm similar to the map() function in p5.js. However, there is no equivalent built into GLSL, so we need to write it ourselves.
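The remap itself is a one-liner. Here is the same linear interpolation that p5.js's map() performs, written out in JavaScript; a GLSL version would look identical apart from the float types:

```javascript
// Linearly remap `value` from [inMin, inMax] to [outMin, outMax],
// like p5.js's map() function.
function map(value, inMin, inMax, outMin, outMax) {
  return outMin + (outMax - outMin) * (value - inMin) / (inMax - inMin);
}

map(0.5, 0.0, 1.0, 0.0, 10.0); // → 5.0
```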
While this looks interesting, it’s not exactly what we were aiming for. It looks like the pixelation is following a circular pattern, which makes sense considering our distance calculations and mask are both circular in shape as well.
The trick here is to use the quantized texture coordinate in the distance calculation. This will make our distance values also increment in steps and our pixelated blocks will be squares.
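A quick JavaScript check of the idea (quantize and the 0.1 block size are illustrative): two coordinates that fall in the same block quantize to the same point, so they get the same distance, and therefore the same pixelation level.

```javascript
const quantize = (v, step) => v - (v % step);
const dist = (p, q) => Math.hypot(p[0] - q[0], p[1] - q[1]);

const center = [0.5, 0.5];
// Two different coordinates inside the same 0.1-wide block...
const a = [quantize(0.101, 0.1), quantize(0.55, 0.1)];
const b = [quantize(0.199, 0.1), quantize(0.55, 0.1)];
// ...collapse to one point, so the distance is constant across the block,
// and the distance values step in increments instead of varying smoothly.
dist(a, center); // numerically equal to dist(b, center)
```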
Randomness
When used purposefully, adding randomness to a sketch can lead to very interesting results. While some form of random() function exists in most programming languages, there is no implementation in GLSL. However, a quick Google search will return many results with the following:

float random(vec2 st) {
    return fract(sin(dot(st.xy, vec2(12.9898, 78.233))) * 43758.5453123);
}
This is a standard implementation that is used in many GLSL programs. This function and the numbers used are well explained and visualized in the Book of Shaders, but what is essentially happening is that we are taking a cyclical value (a sin() wave), multiplying it by a large value to get more varied results, and then keeping only the decimal part of the large number using fract().
If we just draw this value out in our fragment, our image looks like static.
We can use our quantizing mod() trick here to enlarge the cells. This reveals that each call to random() returns a gray value between 0.0 (black) and 1.0 (white). Note also that calling random() with the same parameter (in this case, our quantized texture coordinate) returns the same value. This is why this is called a pseudorandom function; it looks like a random number generator, but it is actually deterministic (predictable).
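Outside the shader, the same hash can be reproduced in JavaScript to experiment with; fract mirrors GLSL's fract(), and note that Math.sin may differ from a GPU's sin() in the last few bits, so exact values will not match the shader:

```javascript
const fract = (x) => x - Math.floor(x);

// The classic GLSL one-liner: dot the coordinate with two magic numbers,
// run it through sin(), blow it up, and keep only the decimal part.
function random(st) {
  const d = st[0] * 12.9898 + st[1] * 78.233; // dot(st, vec2(12.9898, 78.233))
  return fract(Math.sin(d) * 43758.5453123);
}

random([0.3, 0.7]) === random([0.3, 0.7]); // → true: deterministic
```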
Let’s add an elapsed time uniform to the shader to bring in animation. We will use the time to change the random() parameter for each frame rendered. Using the time value directly is too chaotic and fast; it needs to be slowed down to give us a good-looking result.
This value can then be used like a regular input in our shader. Here is a simple example where the random value is used as a threshold to draw a checkerboard video.
Other useful random number generators are noise() functions. Unlike random(), noise generators are continuous over their domain. This means that if we call a noise() function with arguments that are close to each other, the results will also be close to each other. This is well illustrated by this openFrameworks example:
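As a rough illustration of that continuity, here is a minimal 1D value noise in JavaScript. This is not the simplex implementation used later, just a hedged sketch of the general recipe: hash the integer lattice points, then smoothly interpolate between them.

```javascript
const fract = (x) => x - Math.floor(x);
const hash = (n) => fract(Math.sin(n) * 43758.5453123); // pseudorandom value per lattice point

// 1D value noise: interpolate smoothly between the random values
// assigned to the two surrounding integer lattice points.
function valueNoise(x) {
  const i = Math.floor(x);
  const f = x - i;
  const u = f * f * (3 - 2 * f); // smoothstep-style easing
  return hash(i) * (1 - u) + hash(i + 1) * u;
}

// Nearby inputs give nearby outputs, unlike random().
valueNoise(1.23); // close to valueNoise(1.24)
```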
There are many different types of noise generators, with the most famous one being Perlin noise. Once again, this is well explained and illustrated in the Book of Shaders and is well worth a read.
Although some noise() functions are implemented in OpenGL Core, they are not available in OpenGL ES or WebGL, so we must include our own custom versions.
We can add a simplex noise implementation to our fragment shader and draw the value out to see what it looks like. We don’t really need to worry about how the noise function works; just remember that we call it as snoise() and pass in a vec2 as a parameter. The result will be a number between -1 and 1.
In this example, we are using two uniform variables to scale our result:
 uNoiseScale modifies how big of a step we take when sampling the noise function. A bigger step will give us more variance (more ups and downs), and a smaller step will give us more resolution.
 uOffsetScale modifies the result of the noise sampling. A bigger value will give us more variance between the peaks and troughs.
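Applying the two uniforms then looks roughly like this — a JavaScript sketch with a stand-in noiseFn in place of snoise(); the uniform names match the ones described above:

```javascript
// Offset a texture coordinate by a noise sample:
// uNoiseScale controls the step size into the noise field,
// uOffsetScale controls how far the sample displaces the coordinate.
function offsetCoord(coord, noiseFn, uNoiseScale, uOffsetScale) {
  const n = noiseFn(coord[0] * uNoiseScale, coord[1] * uNoiseScale);
  return [coord[0] + n * uOffsetScale, coord[1] + n * uOffsetScale];
}
```

In the actual shader, the displaced coordinate is what gets used when sampling the video texture.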
Let’s bring the webcam texture back in, and use the noise to offset our texture coordinate when sampling the video. Note that we have greatly reduced the range of uNoiseScale in this example to avoid getting too much tiling of the image.
Here again we can bring in the time element to animate the effect. Adding the time to the snoise() offset results in a wobbly video effect. If we uncomment the last line of the fragment shader, we can see how all the parameters affect the sampled noise value.