CPU and GPU
A computer has two main components that “compute”: the CPU and the GPU.
The CPU (Central Processing Unit) is often called the brain of the computer. This is where all control and logic takes place; it’s good at following instructions, making decisions and branching, and performing complex calculations.
A defining feature of the CPU is that it runs sequentially. It can only do one thing at a time, but it is very powerful, so it can go through a sequence of computations very quickly.
The GPU (Graphics Processing Unit) is built specifically for multitasking, for running multiple calculations in parallel (at the same time). Instead of focusing on speed, the GPU is all about volume: it contains thousands of processors, sometimes referred to as cores or threads, each of which is much less powerful than a CPU core.
Note that most modern CPUs are actually multi-threaded, meaning they can do some level of parallel processing. However, these numbers are still small compared to GPU cores. For example, the Intel Core Ultra and Apple M3 CPUs have between 8 and 16 cores, whereas the latest NVIDIA RTX 4090 GPU has 16,384 cores. You might also be surprised at the number of applications you use daily that only run on a single CPU core.
Most, if not all, of the programming we have done so far has been on the CPU.
Most, if not all, of the programming we have done so far has been sequential.
The GPU is particularly great for rendering graphics, where the goal is to get a lot of pixels on screen as fast as possible. Instead of rendering every pixel one at a time, we can render all pixels at the same time.
It is now time for the obligatory Mythbusters demo, which is probably one of the best visual examples of the power of GPUs.
Depending on the tasks we are trying to do on our computer, the CPU or the GPU might be the best tool for the job. An analogy I quite like is the one about being a farmer with 4 oxen and 1024 chickens, and having to pick the best group of animals for the job. Which would we use to clean a field? Which would we use to plow that field?
The CPU and GPU work together to draw graphics to the screen.
- The CPU sets up the scene, loads images, builds meshes, sets parameters (e.g. transformations, fill color), etc.
- The CPU hands over the data to the GPU.
- The GPU does the actual rendering, converting all that data to pixel colors on screen.
What are shaders?
Shaders are programs that run on the GPU. This means they run in parallel, which is the main reason people say shaders are hard.
It takes a bit of time to get used to this as it requires a change in how we think about programming.
One way to think about it is like running a for-loop where the iterations run in random order, and some iterations might take longer than others.
- We do not control the order in which shader instances are run.
- Some shader instances will complete before others, and we do not know when each will be finished.
This makes calculations that are easy to do on the CPU very hard to do on the GPU, for example averaging a set of values or comparing them against each other.
Another hurdle with shaders is that they are difficult to debug. The pipeline is designed to send data from the CPU to the GPU, and it is very hard, sometimes impossible, to go the other way from the GPU to the CPU. We cannot print values to the console, we cannot pause a shader in the middle of execution to see what’s happening. The main thing we can do is look at the output on screen and try to understand where we went wrong if it doesn’t look right.
GLSL
Shader programs are written in their own language. In WebGL, this language is GLSL. Its syntax is based on C, so if you’ve written anything in C++/openFrameworks or Java/Processing, it should look familiar.
If you’ve only done JavaScript programming, you will notice that GLSL is much stricter about syntax. Variables are strongly typed, meaning every variable we declare needs an explicit type, and statements must end with a semicolon.
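For example, a few hypothetical declarations might look like this (shown as a snippet inside a function body, not a complete shader):

```glsl
void main() {
  // Every variable has a declared type; every statement ends with a semicolon.
  float radius = 10.0;    // floats need a decimal point
  int count = 3;
  bool isVisible = true;
}
```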
GLSL includes many variable types.
- Scalars: float, int, bool
- Vectors: vec2, vec3, vec4
- Matrices: mat3, mat4, mat3x4, …
- Samplers: sampler2D, samplerCube, …
Variables can also have qualifiers, which determine where their values come from and what their intended use is.
- uniform: Data that is shared between all instances of the program (all vertices).
- attribute: Data that is specific to a single instance of the program (a vertex).
- varying: Data that is specific to a single instance and passed between different shaders in the pipeline.
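As a quick illustration, here is a hypothetical set of declarations (the names simply follow the p5.js conventions we will see later in these notes):

```glsl
uniform float uTime;        // the same value for every instance of the program
attribute vec3 aPosition;   // a different value for each vertex
varying vec4 vColor;        // written per vertex, interpolated and read per fragment
```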
GLSL includes many built-in functions.
- Linear Algebra: dot(), cross(), normalize(), …
- Trigonometry: radians(), sin(), atan(), …
- Texture Sampling: texture2D(), textureCube(), …
- Interpolation: mix(), smoothstep(), clamp(), …
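For example, a few of these can be combined like so (shown as a snippet inside a function body, not a complete shader):

```glsl
void main() {
  // Normalize a direction, measure how much it points "up" with a dot product,
  // and use that amount to blend between black and orange.
  vec3 dir = normalize(vec3(1.0, 2.0, 0.0));
  float up = clamp(dot(dir, vec3(0.0, 1.0, 0.0)), 0.0, 1.0);
  vec3 color = mix(vec3(0.0), vec3(1.0, 0.5, 0.0), up);
}
```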
We will learn how to use all of these over the next few weeks. For reference, you can use the official OpenGL Reference or the function reference on Shaderific.
The shader pipeline
Shaders run in different steps, one after the other. These are sometimes called shader units.
- Some of these steps are mandatory while others are optional, but they always run in the same order.
- Each step is tailored to work on a specific input data set and is expected to output specific result data.
- The steps are run sequentially, and the data from a previous step can be used in the next step.
- Some units are only available on higher-end platforms and might not be compatible with WebGL.
Every WebGL sketch we have rendered so far has been using shaders. p5.js has default shaders that it uses under the hood, which are automatically loaded depending on the context we are drawing in. We can always override these and use our own custom shaders instead.
In p5.js, the p5.Shader object holds our shader program. It can be created using loadShader() and it can then be enabled using shader(). The syntax is a bit confusing, but will make more sense with practice.
The two units we will look at today are vertex shaders and fragment shaders. The simplest shader we can write will require both of these steps.
Let’s add a shader to the last example from the previous class, where we drew a frame with color and position attributes.
We don’t see anything on screen yet but we will fix this shortly.
Note that there are two new files in the project: simple.vert and simple.frag.
- simple.vert contains the code for the vertex unit.
- simple.frag contains the code for the fragment unit.
Vertex Shader
The vertex shader must have a main() function and must set the value of the built-in variable gl_Position.
gl_Position is a 4D vector even though we are only working in 3D. There are a few reasons for this (which we’ll go over later), but for now just remember that we can set the fourth component of a 3D position to 1.0 to make it 4D.
The vertex unit runs in parallel for every vertex in the mesh. In our example, this means that 10 instances of the program will run at the same time. The difference is that each instance will have its own value for the aPosition variable, corresponding to whatever value we passed to vertex() when we built the mesh. The qualifier attribute is used to indicate that this variable is in the vertex scope: it has a separate value per vertex. In p5.js, the value of aPosition is automatically set when we call vertex().
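As a reference, a minimal sketch of what simple.vert might look like at this stage:

```glsl
// A first attempt: pass the vertex position straight through.
attribute vec3 aPosition;

void main() {
  // Make the 3D position 4D by setting the fourth component to 1.0.
  gl_Position = vec4(aPosition, 1.0);
}
```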
We are converting this value to a 4D vector and setting gl_Position, but we are still not seeing anything on screen.
We can try playing with the x and y values of the position to figure out what’s going on. It might help to use a fixed canvas size and use the values for width and height in our calculations. Note that we can isolate components in vec2, vec3, and vec4 types using the dot notation.
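For example, using dot notation on a vec4 (as a snippet):

```glsl
void main() {
  vec4 p = vec4(10.0, 20.0, 30.0, 1.0);
  float x  = p.x;    // a single component
  vec2 xy  = p.xy;   // the first two components as a vec2
  vec3 rgb = p.rgb;  // the same components can also be addressed as colors
}
```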
After a bit of trial and error, you will notice that the mesh is very big and off-center.
gl_Position expects the values it receives to be in clip space. Clip space is a normalized space, meaning that it ranges from -1 to 1 in both dimensions. We need to remap our coordinate system from [0, 400] to [-1, 1].
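Assuming a fixed 400×400 canvas, the remap might look something like this (the exact range and any y flip depend on how the mesh was built):

```glsl
attribute vec3 aPosition;

void main() {
  // [0, 400] -> [0, 1] -> [-1, 1]
  vec2 clip = (aPosition.xy / 400.0) * 2.0 - 1.0;
  // Depending on the coordinate convention, the y axis may also need to be
  // flipped, since clip space y points up while canvas y points down.
  gl_Position = vec4(clip, 0.0, 1.0);
}
```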
You will also notice that commenting out the call to translate() in the sketch does not seem to have any effect. This makes sense when examining the shader closely: we are only using the values of the position attribute in our program, we don’t have any other input!
Model View Projection
Remember from our last class that all the calls to translate(), rotateX/Y/Z(), and scale() get added into a transformation matrix. We need to get the values of that matrix in our shader in order to use it.
There are actually 3 transformation matrices we need to take into account:
- The model matrix converts our positions from local space to world space. This is where the calls to translate(), rotateX/Y/Z(), and scale() get stacked.
- The view matrix converts our world space positions to camera space. Even though we haven’t explicitly added a camera to our sketch, there is a default built-in camera that is “looking” at our scene, and it happens to be perfectly aligned with the canvas. (We will look at how to use our own cameras later.) This matrix makes sure all the points are in the camera’s point of view.
- The projection matrix converts our camera space points to screen space. This is where all our 3D data gets converted to 2D, taking perspective, field of view, and other parameters into consideration.
Combined, these matrices make up the model view projection. This article by Jordan Santell does a great job of visualizing how this all works.
In p5.js, these matrices are automatically set in the variables uModelViewMatrix and uProjectionMatrix. The model and view matrices are combined into one, which is often the case in graphics frameworks for performance reasons. The qualifier uniform is used to indicate that these variables are in the shader scope: they have the same value for every shader instance, i.e. for all vertices.
We simply need to multiply the two matrices with the position to get the correct clip space value. Note that the call to translate() in the sketch now works as expected. You don’t have to understand exactly how these matrices work, but be aware of what they represent and how they are used.
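Putting this together, the vertex shader might now look something like this (both matrices are set automatically by p5.js):

```glsl
attribute vec3 aPosition;

uniform mat4 uModelViewMatrix;
uniform mat4 uProjectionMatrix;

void main() {
  // Local space -> camera space -> clip space.
  gl_Position = uProjectionMatrix * uModelViewMatrix * vec4(aPosition, 1.0);
}
```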
p5.js also has a p5.Camera object which we can use as a virtual camera.
- Changing the camera’s position and direction with pan(), tilt(), and lookAt() will change the view matrix.
- Changing the camera’s geometry and “lens” with perspective() and ortho() will change the projection matrix.
We will take a closer look at cameras when we start using three.js.
Fragment Shader
You probably noticed that the colors from the calls to fill() are not coming through. This is where the fragment shader comes in. It must also have a main() function, and it must set the value of the built-in variable gl_FragColor.
The fragment code also needs a default precision specifier at the top of the file. This tells the GPU how much precision to use in its calculations. This can be any of highp, mediump, or lowp, but note that highp is not always available on mobile and web platforms. In general, we can just use mediump and not worry about it unless we get unexpected results.
A fragment is a “possible” pixel, but we can assume it’s the same thing as a pixel for simplicity. The fragment unit runs in parallel for every fragment in the mesh, i.e. every pixel that is drawn to the screen to render this mesh. We may have the same number of fragments as vertices when using the GL_POINTS topology, but otherwise we will have many more fragments to render, since we are filling in all the space in between the vertex positions. The fragment position has already been set in the vertex shader, so all that is left to do is to assign it a color.
Colors in GLSL are normalized, ranging from 0.0 to 1.0. They are also structured as a vec4 representing the RGBA channels.
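For example, a minimal fragment shader that colors every fragment the same opaque red might look like this:

```glsl
precision mediump float;

void main() {
  // (R, G, B, A), each channel in the range 0.0 to 1.0.
  gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
```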
p5.js automatically sets the color attribute in the aVertexColor variable. However, attributes are only available in the vertex shader, so we need a way to pass this value between the vertex and fragment units. This can be achieved by using the varying qualifier on a new variable. We can write to a varying variable in the vertex shader and read its value in the fragment shader, as long as both variables have the same name.
Remember that varying values are interpolated to fill in the gaps in the data. Although we have many more fragment passes than vertex passes, each fragment still gets a color value from the varying variable. This is a value blended from the surrounding vertices, which is why we get gradients across the mesh.
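Here is a sketch of how that could look, with the two programs shown one after the other. The varying name vColor is our own choice; what matters is that the declarations match in both files.

```glsl
// simple.vert: copy the per-vertex color into a varying.
attribute vec3 aPosition;
attribute vec4 aVertexColor;

uniform mat4 uModelViewMatrix;
uniform mat4 uProjectionMatrix;

varying vec4 vColor;

void main() {
  vColor = aVertexColor;
  gl_Position = uProjectionMatrix * uModelViewMatrix * vec4(aPosition, 1.0);
}
```

```glsl
// simple.frag: read the interpolated color and output it.
precision mediump float;

varying vec4 vColor;

void main() {
  gl_FragColor = vColor;
}
```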
The p5.js WebGL Architecture document is a good reference outlining what uniforms and attributes are automatically set by the framework.
Uniforms
We can pass our own custom uniforms to the shader using the p5.Shader setUniform() function. This can be anything we want, so this is where we can start to get creative.
We can move the translation to the origin inside the vertex shader. The shader does not know what the canvas size is, but we can pass it in as a uniform. We will follow convention and prefix the name with u.
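For example, assuming we call the uniform uCanvasSize and set it from the sketch with something like setUniform('uCanvasSize', [width, height]), the vertex shader could center the mesh like this:

```glsl
attribute vec3 aPosition;

uniform mat4 uModelViewMatrix;
uniform mat4 uProjectionMatrix;
uniform vec2 uCanvasSize;  // a custom uniform set from the sketch

void main() {
  // Offset the position by half the canvas so the mesh is centered on the origin.
  vec3 centered = aPosition - vec3(uCanvasSize * 0.5, 0.0);
  gl_Position = uProjectionMatrix * uModelViewMatrix * vec4(centered, 1.0);
}
```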
We can also pass uniforms directly to the fragment shader, without having to go through varying variables. This is because uniforms are the same for all vertices and therefore do not need to be interpolated. Let’s pass in the elapsed time since the sketch started as uTime. Because this value changes every frame, we will be able to animate our shader program.
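For example, assuming the sketch sets the uniform every frame with something like setUniform('uTime', millis() / 1000.0), the fragment shader could pulse the colors over time:

```glsl
precision mediump float;

uniform float uTime;   // seconds since the sketch started, set from the sketch

varying vec4 vColor;

void main() {
  // Oscillate between 0.0 and 1.0 over time and fade the color with it.
  float pulse = 0.5 + 0.5 * sin(uTime);
  gl_FragColor = vec4(vColor.rgb * pulse, vColor.a);
}
```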
Exercise
Remix the previous sketch to make the frame follow the mouse.
- The mouse cursor should be in the middle of the frame.
- Do not change the mesh in the JS file, apply the transformation in the shader.
- You will want to make the frame smaller. Try adjusting the paddingX, paddingY, and thickness values in the sketch.
How would you make sure the shape is always a square, for all window resolutions?
- Apply the transformation in the shader without adding new uniforms.
How would you warp the shape as it gets near the window edges?
- Apply the transformation in the shader without adding new uniforms.
How would you make that warp uniform so that the shape is always a square?
- Apply the transformation in the shader without adding new uniforms.