Tools and Frameworks

We have been focusing on WebGL for most of the course but this is just a small part of the world of shaders. As shaders are the language for programming graphics, most tools and frameworks have some type of interface for working with the GPU. Different OpenGL versions and platforms each have their own unique GLSL commands, and other systems like DirectX use a completely different language. However, the concepts under the hood are all very similar, so it is just a matter of getting familiar with the syntax to leverage the power of the GPU.

Let’s look at a few different platforms and explore this firsthand.

Processing

Processing is the precursor to p5.js and therefore uses a lot of similar syntax. Processing is written in Java and predominantly runs on desktop and Android. Just as p5.js needs a WEBGL canvas to run WebGL code, Processing needs a P2D or P3D renderer to run OpenGL code.

void setup() 
{
  size(640, 360, P3D);
}

Shaders are loaded inside a PShader object using loadShader(). The shader is enabled using the shader() function, and everything drawn after the call will use the enabled shader program.

PShape sphereShape;
PShader normalShader;

float angle = 0.0;

void setup() 
{
  size(640, 360, P3D);
  
  normalShader = loadShader("normalFrag.glsl", "normalVert.glsl");
  
  sphereShape = createShape(SPHERE, 120);
  sphereShape.setFill(color(255));
  sphereShape.setStroke(false);
}

void draw() 
{
  background(0);
  
  translate(width/2, height/2);
  rotateX(angle);
  rotateY(angle);
  
  shader(normalShader);
  shape(sphereShape);
  
  angle += 0.01;  
}

By default, shaders in Processing use OpenGL 2.0, which is roughly on par with the WebGL feature set we have been using in p5.js and three.js. The built-in uniforms and attributes use a different naming convention, without any prefixes like u or a, so that’s something to pay attention to. The position attribute also already comes in as a vec4, so we can use it directly in our MVP transformation.

uniform mat4 modelview;
uniform mat4 projection;

attribute vec4 position;
attribute vec3 normal;

varying vec3 vNormal;

void main()
{
  gl_Position = projection * modelview * position;
  vNormal = normal;
}

Processing also passes the entire MVP matrix as the transform uniform, which can be used in a single matrix multiplication to calculate our clip position.

uniform mat4 transform;

attribute vec4 position;
attribute vec3 normal;

varying vec3 vNormal;

void main()
{
  gl_Position = transform * position;
  vNormal = normal;
}

Download the project here.

For more information, the Processing Shaders Tutorial is a good starting point. (This tutorial was offline at the time of this writing, but it can still be accessed through the Wayback Machine.)

openFrameworks

openFrameworks (OF) is an open source cross-platform creative coding toolkit. Like p5.js, it is also originally based on Processing, but it has taken its own shape as it has developed over the years. OF is written in C++, which is a lower-level language than Java or JavaScript. This has the advantage of giving the programmer much more control, but it can also be more complicated, as there are more ways to break things and more things to manage. For more information, you can read through the Intro to openFrameworks page on the Seeing Machines course site.

openFrameworks also uses OpenGL for rendering. Programmers have the option to set the OpenGL version to use for each application. Many OF users prefer to use the programmable pipeline (OpenGL 3.0 and up) as it allows more control when rendering meshes and graphics. This can be set in the main() function, which is the entry point into the program.

#include "ofMain.h"
#include "ofApp.h"

int main() 
{
  ofGLFWWindowSettings settings;
  settings.setGLVersion(3, 3);
  settings.setSize(1280, 720);
  ofCreateWindow(settings);
  ofRunApp(new ofApp());
}

openFrameworks uses ofMesh to generate and render geometry, and ofShader to compile and bind shader programs. For more information, the ofBook Introducing Shaders section is a good introduction.
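
Before diving into the shader code itself, here is a minimal sketch of the typical ofShader and ofMesh flow: load the program once in setup(), then wrap the draw calls between begin() and end(). This is not code from the course projects; the file names shaders/normal.vert and shaders/normal.frag are just placeholders.

#include "ofMain.h"

class ofApp : public ofBaseApp
{
public:
  void setup()
  {
    // Compile and link the shader program from files in the data folder.
    shader.load("shaders/normal.vert", "shaders/normal.frag");

    // Build a sphere mesh on the CPU; it is uploaded to the GPU when drawn.
    sphere = ofMesh::sphere(120.0f);
  }

  void draw()
  {
    ofEnableDepthTest();
    camera.begin();

    // Everything drawn between begin() and end() uses the shader program.
    shader.begin();
    shader.setUniform1f("uDisplacement", 0.1f);
    sphere.draw();
    shader.end();

    camera.end();
    ofDisableDepthTest();
  }

  ofShader shader;
  ofMesh sphere;
  ofEasyCam camera;
};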

OpenGL 3.x+ shaders need their version defined at the top using a #version ### directive. This tells the shader compiler which features we are planning on using, and it can be cross-checked against the version of the API we are using on the CPU side of the app. When looking up variables and functions in the GLSL reference, compatible versions are listed at the bottom of the page.

The OpenGL and GLSL version numbers don’t match up until OpenGL 3.3 / GLSL 3.30, so you will need to use a table like the following to find corresponding values.

OpenGL Version    GLSL Version
2.0               1.10
2.1               1.20
3.0               1.30
3.1               1.40
3.2               1.50
3.3               3.30
4.0               4.00
4.1               4.10
4.2               4.20
4.3               4.30
4.4               4.40
4.5               4.50

To keep things simple, I would suggest sticking with #version 330 for OpenGL 3.3, as that will be compatible on most desktop platforms.
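
If you are unsure which versions your context actually provides, you can query them at runtime. The sketch below is not part of the course projects and assumes it runs inside ofApp::setup(), after the window has been created in main().

void ofApp::setup()
{
  // Both strings are reported by the driver, so this is a quick way to
  // confirm the OpenGL / GLSL pairing from the table above.
  ofLogNotice("ofApp") << "OpenGL version: "
    << reinterpret_cast<const char *>(glGetString(GL_VERSION));
  ofLogNotice("ofApp") << "GLSL version: "
    << reinterpret_cast<const char *>(glGetString(GL_SHADING_LANGUAGE_VERSION));
}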

Programmable pipeline shaders do not use the attribute and varying qualifiers. Instead, they use in to represent values that are input into the shader unit, and out for values that are output from the shader unit.

In the vertex shader:

  • attributes use the in qualifier.
  • varyings use the out qualifier.
#version 330

// OF built-in uniforms.
uniform mat4 modelViewProjectionMatrix;

// Custom uniforms.
uniform float uResolution;
uniform float uDisplacement;

// OF built-in attributes.
in vec4 position;
in vec3 normal;

// Custom varyings.
out vec4 vColor;

void main()
{
  vec4 modelPos = position;
  modelPos.xyz += normal * sin((normal.x + normal.y + normal.z) * uResolution * 100) * uDisplacement;
  
  gl_Position = modelViewProjectionMatrix * modelPos;
  
  vColor = vec4(normal * 0.5 + 0.5, 1.0);
}

In the fragment shader:

  • varyings use the in qualifier. Their name must still match the out of the vertex shader.
  • there is no gl_FragColor variable. An out vec4 variable must be declared to set the output color. This variable can be named whatever you want.
#version 330

// Custom varyings.
in vec4 vColor;
out vec4 fragColor;

void main()
{
  fragColor = vColor;
}

Download the “Normals” project here.

Transform Feedback

Meshes are an abstraction of something called a vertex buffer object or VBO. A VBO is simply a data structure that stores vertex information. Up until now, we have filled VBOs on the CPU, then uploaded them to the GPU for shader processing and rendering.
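
As a reminder of that CPU-side flow, here is a small sketch of filling a VBO on the CPU and uploading it with ofVbo. This is not part of the project download; the triangle data is just an illustration.

#include "ofMain.h"

ofVbo triangleVbo;

void buildTriangleVbo()
{
  // Build the vertex data on the CPU...
  std::vector<glm::vec3> vertices = {
    { -100.0f, -100.0f, 0.0f },
    {  100.0f, -100.0f, 0.0f },
    {    0.0f,  100.0f, 0.0f }
  };

  // ...then upload it to GPU memory. GL_STATIC_DRAW hints that the data
  // will not change every frame.
  triangleVbo.setVertexData(vertices.data(), static_cast<int>(vertices.size()), GL_STATIC_DRAW);
}

// Later, in ofApp::draw():
//   triangleVbo.draw(GL_TRIANGLES, 0, 3);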

Desktop OpenGL includes a feature called transform feedback, which allows us to write to VBOs from our vertex shaders. These VBOs can be used as geometry input to different parts of the program, and can even be mapped back to a CPU data structure, meaning we can recover computations made on the GPU! This concept is similar to how we have been using offscreen render buffers and textures for multi-pass image effects, except in this case we are manipulating and retrieving meshes directly.

When setting up an ofShader, we can use ofShader::TransformFeedbackSettings to tell it which varyings we want to capture in our buffer. In this case, we will capture the position and color, both calculated in the vertex shader. Note that the shader only requires a vertex unit, as we are not drawing anything to the screen and do not need a fragment shader.

  auto normalSettings = ofShader::TransformFeedbackSettings();
  normalSettings.shaderFiles[GL_VERTEX_SHADER] = "shaders/normal.vert";
  normalSettings.varyingsToCapture = { "vPosition", "vColor" };
  normalShader.setup(normalSettings);

We then bind the buffer as we enable the shader, so that whatever geometry we output gets saved into the buffer.

normalShader.beginTransformFeedback(GL_TRIANGLES, sphereBuffer);
{
  normalShader.setUniform1f("uResolution", resolution);
  normalShader.setUniform1f("uDisplacement", displacement);
  sphereMesh.draw();
}
normalShader.endTransformFeedback(sphereBuffer);

The VBO can then be used like a mesh or any other geometry data. In OF, we can bind the data to an ofVbo to render it in another pass. The feedback data will be interleaved, meaning all the position and color data will be packed into the same array, one vertex after the other: XYZWRGBA XYZWRGBA XYZWRGBA .... We can map that data in our VBO by setting the stride and offset.

  • The stride is the space that every single vertex takes up in the array. In our case, this is 8 floats (or 2 vec4s): one vec4 for the position and the other for the color.
  • The offset is the memory offset from the start of the vertex memory where this attribute’s data starts. This is 0 for the position, as it is the first attribute, and the number of bytes a vec4 takes up for the color, as it is the second attribute, right after the position.
  sphereVbo.setVertexBuffer(sphereBuffer, 4, sizeof(glm::vec4) * 2, 0);
  sphereVbo.setColorBuffer(sphereBuffer, sizeof(glm::vec4) * 2, sizeof(glm::vec4));

The ofVbo can then be drawn to the screen like any other geometry. Note that the data never leaves the GPU; we are just telling the VBO where in memory to find the attributes it needs to render the transformed geometry, so this can run very fast.

  camera.begin();
  ofEnableDepthTest();
  {
    ofRotateXDeg(rotationAngle);
    ofRotateYDeg(rotationAngle);

    sphereVbo.draw(GL_TRIANGLES, 0, sphereVbo.getNumVertices());
  }
  ofDisableDepthTest();
  camera.end();

It is also possible to download the data back to the CPU. This can be useful, for example, to generate geometry and save it out as a 3D model. ofBufferObject::map() will map the GPU data into a CPU array. This data can then be copied to an ofMesh and easily be saved out as a PLY model using the ofMesh::save() function.

void ofApp::keyPressed(int key)
{
  if (key == ' ')
  {
    sphereBuffer.bind(GL_ARRAY_BUFFER);
    glm::vec4 * verts = sphereBuffer.map<glm::vec4>(GL_READ_ONLY);
    ofMesh saveMesh;
    for (int i = 0; i < sphereVbo.getNumVertices(); ++i)
    {
      saveMesh.addVertex(verts[i * 2 + 0]);
      saveMesh.addColor(ofFloatColor(verts[i * 2 + 1].r, verts[i * 2 + 1].g, verts[i * 2 + 1].b, verts[i * 2 + 1].a));
    }
    saveMesh.save(ofGetTimestampString("%Y%m%d-%H%M%S") + ".ply");
    sphereBuffer.unmap();
  }
}

Download the “Feedback” project here.

Geometry Shaders

The geometry shader is an optional shader that sits between the vertex and fragment shaders. This shader allows creating new geometry on-the-fly using input it receives from the vertex shader. Geometry shaders are useful because they can reduce the amount of data transferred from the CPU to GPU.

  auto lineSettings = ofShaderSettings();
  lineSettings.shaderFiles[GL_VERTEX_SHADER] = "shaders/line.vert";
  lineSettings.shaderFiles[GL_GEOMETRY_SHADER] = "shaders/line.geom";
  lineSettings.shaderFiles[GL_FRAGMENT_SHADER] = "shaders/line.frag";
  lineShader.setup(lineSettings);

Geometry shaders can also convert the topology from the input to the output. For example, we can write a geometry shader to draw lines representing a mesh’s vertex normals. This would take in a GL_POINTS topology and output GL_LINES.

When using a geometry shader, the vertex shader does not set a value in gl_Position. This is now set in the geometry shader. Any varyings the vertex unit generates will be accessible in the geometry unit.

#version 330

// OF built-in attributes.
in vec4 position;
in vec3 normal;

// Custom varyings.
out vec4 vPosition;
out vec3 vNormal;

void main()
{
  vPosition = position;
  vNormal = normal;
}

The geometry program has special layout commands at the top that indicate the inputs and outputs.

  • layout (...) in; indicates the input geometry and layout (...) out; indicates the output geometry.
  • Possible values for the input topology are points, lines, or triangles.
  • Possible values for the output topology are points, line_strip, or triangle_strip.
  • The output command also needs to include a max_vertices value, which indicates the maximum number of vertices that will be output each time the geometry shader runs.
#version 330

layout (points) in;
layout (line_strip, max_vertices = 2) out;

The varyings come into the geometry shader as arrays. The number of elements in the array depends on the input topology. points will produce just 1 element, lines will produce 2 elements, and triangles will produce 3 elements. Having access to all vertices that are part of the “face” can be useful in the calculations we make.

// OF built-in uniforms.
uniform mat4 modelViewProjectionMatrix;

// Custom uniforms.
uniform float uNormalLength;

// Custom varyings.
in vec4 vPosition[];
in vec3 vNormal[];

out vec4 vColor;

Vertices are output by setting the value of varyings, then calling EmitVertex(). This will push out the values as a vertex, for further processing in the fragment shader. We can also call EndPrimitive() after emitting vertices if we want to separate our faces. For example, if we are using the triangle_strip output topology but want to output separate triangles, we can call EndPrimitive() after every 3 calls to EmitVertex() to reset the strip.

The output varyings have to be set for each vertex. In our example, this means the built-in gl_Position and the custom vColor both need to be set.

void main()
{
  vec4 pos1 = vPosition[0];
  vec4 pos2 = vPosition[0] + vec4(vNormal[0], 0.0) * uNormalLength;

  vec3 normalCol = vNormal[0] * 0.5 + 0.5;

  gl_Position = modelViewProjectionMatrix * pos1;
  vColor = vec4(normalCol, 1.0);
  EmitVertex();

  gl_Position = modelViewProjectionMatrix * pos2;
  vColor = vec4(normalCol, 0.0);
  EmitVertex();

  EndPrimitive();
}

The fragment shader remains unchanged. It just needs to declare an out vec4 for the pixel color.

#version 330

// Custom varyings.
in vec4 vColor;
out vec4 fragColor;

void main()
{
  fragColor = vColor;
}

We can also add an option to calculate and draw face normals, by changing the input geometry from points to triangles, and taking the average of the 3 vertex points that form a triangle to calculate the face normal.

#version 330

layout (triangles) in;
layout (line_strip, max_vertices = 4) out;
void main()
{
  vec4 pos1 = vec4(0.0, 0.0, 0.0, 1.0);
  vec3 norm = vec3(0.0);
  for (int i = 0; i < 3; ++i)
  {
    pos1.xyz += vPosition[i].xyz;
    norm.xyz += vNormal[i];
  }
  pos1.xyz /= 3.0;
  norm /= 3.0;
  vec4 pos2 = pos1 + vec4(norm, 0.0) * uNormalLength;

  vec3 normalCol = vNormal[0] * 0.5 + 0.5;

  gl_Position = modelViewProjectionMatrix * pos1;
  vColor = vec4(normalCol, 1.0);
  EmitVertex();

  gl_Position = modelViewProjectionMatrix * pos2;
  vColor = vec4(normalCol, 0.0);
  EmitVertex();

  EndPrimitive();
}

Download the “Lines” project here.

Unity

Unity is a cross-platform tool used primarily for making games, but it can also be used for graphics applications and interactive installations. Unity has a graphical user interface for its scene graph, which makes it easy to use, giving the programmer visual feedback as they build their applications.

Unity rendering works similarly to three.js. A mesh is added to the world and a material is applied to the mesh. The material can use a built-in shader or a custom shader to render the mesh.

Shaders in Unity are written in a language called ShaderLab. A ShaderLab file consists of many parts.

Shader is the top-level group that encapsulates all the other sections. The shader name is defined here.

Shader "Shader-Time/Normal-Unlit"
{
  ...
}

A Properties section at the top lists the uniforms used in the file. Using special syntax to define their type, default value, and range, these are automatically exposed in the GUI and can be edited on-the-fly.

Properties
{
  _Resolution("Resolution", Range(0, 100)) = 1
  _Displacement("Displacement", Range(0, 1)) = 0.1
}

A SubShader section has all the parameters and code for a shader.

  • This includes properties like blending and culling, as well as the code for all the shader units (vertex, fragment, geometry, etc).
  • Different shader passes can run in a subshader, each inside a Pass section. By default all passes will run in order, but you can also call specific passes directly using advanced commands.
  • The code is written between CGPROGRAM and ENDCG commands.
SubShader
{
  Tags { "RenderType"="Opaque" }
  LOD 100

  Pass
  {
    CGPROGRAM

    ...

    ENDCG
  }
}

The shader code is written in Cg/HLSL. This is a different language than GLSL with its own syntax, but most of the concepts remain the same.

One difference you will notice is the data types.

  • vec2, vec3, vec4, mat4 become float2, float3, float4, float4x4.
  • float is sometimes replaced by half (a half resolution float) or fixed (a fixed precision decimal number) for variables that do not require full precision values.

Attributes and varyings are organized into a struct. They also are tagged with semantics, which indicate their intended use. Semantics are used when referencing data. For example, we upload data to the POSITION and NORMAL slots in the mesh from the CPU, and we don’t need to know that the variables are called position and normal on the GPU (they could in fact be called anything). The SV_POSITION semantic in the vertex output represents the clip space position, equivalent to gl_Position in GLSL. For any custom data, we can use any of the TEXCOORD# semantics.

struct VertInput
{
  float4 position : POSITION;
  float3 normal : NORMAL;
};

struct VertInterpolators
{
  float4 position : SV_POSITION;
  float4 normalColor : COLOR;
};

Vertex and fragment functions are part of the same code block.

  • #pragma vertex XXX and #pragma fragment XXX commands are used to define which functions are used for which unit. (The XXX is the name of the function.)
  • The functions take in a struct with the attributes or varyings as their argument, and have the output as their return value.
CGPROGRAM

#pragma vertex VertProgram
#pragma fragment FragProgram

#include "UnityCG.cginc"

struct VertInput
{
  float4 position : POSITION;
  float3 normal: NORMAL;
};

struct VertInterpolators
{
  float4 position : SV_POSITION;
  float4 normalColor: COLOR;
};

uniform float _Resolution;
uniform float _Displacement;

VertInterpolators VertProgram(VertInput i)
{
  VertInterpolators o;
  float4 modelPos = i.position;
  modelPos.xyz += i.normal * sin((i.normal.x + i.normal.y + i.normal.z) * _Resolution) * _Displacement;
  o.position = UnityObjectToClipPos(modelPos);
  o.normalColor = float4(i.normal * 0.5 + 0.5, 1.0);
  return o;
}

fixed4 FragProgram(VertInterpolators i) : SV_Target
{
  return i.normalColor;
}

ENDCG

Geometry Shaders

Geometry shaders can be built in a similar fashion.

A struct is defined for varyings out of the geometry unit. This will be the new argument for the fragment function.

struct GeomInterpolators
{
  float4 clipPos : SV_POSITION;
  fixed4 color : COLOR;
};

A function is defined for the geometry shader.

  • A #pragma geometry XXX command is added to define the geometry function.
  • The function’s first argument is the input geometry. It is tagged with a topology like triangle or point.
  • The function’s second argument is the output geometry. It is tagged with the inout modifier and a Stream-Output class representing the topology, like LineStream or TriangleStream.
  • The output vertex count is defined in the [maxvertexcount()] attribute placed before the function.
  • Vertices are emitted by calling Append() on the Stream-Output object. This is equivalent to EmitVertex() in GLSL.
#pragma geometry GeomProgram

...

[maxvertexcount(2)]
void GeomProgram(triangle VertInterpolators i[3], inout LineStream<GeomInterpolators> lineStream)
{
  GeomInterpolators o;

  float3 faceNormal = float3(0, 0, 0);
  for (int j = 0; j < 3; ++j)
  {
    faceNormal += i[j].worldNormal;
  }
  faceNormal /= 3.0;

  float3 normalCol = faceNormal * 0.5 + 0.5;

  o.clipPos = mul(UNITY_MATRIX_VP, i[0].worldPos);
  o.color = fixed4(normalCol, 1.0);
  lineStream.Append(o);

  float4 normalOffset = i[0].worldPos + float4(faceNormal, 0.0) * _NormalLength;
  o.clipPos = mul(UNITY_MATRIX_VP, normalOffset);
  o.color = fixed4(normalCol, 0.0);
  lineStream.Append(o);
}

Surface Shaders

ShaderLab can also use something called the Unity Standard pipeline, where only a surface shader is required. Surface shaders are not actual shader units; they are special intermediate functions used to set fragment properties for Unity’s standard lighting function. This lighting function happens in a complex fragment shader that is hidden from us, so we do not need to worry about the details of its implementation. Writing surface shaders allows our materials to use Unity lights, shadows, and reflections without having to code them ourselves.

The surface function provides an inout SurfaceOutputStandard struct as an argument. We just need to fill in the values we want to set on this object and let the rest of the pipeline take care of computing the final color.

We can optionally add a vertex displacement function as well, which will be run before the surface function.

These parameters as well as any options are all set in a single #pragma command.

CGPROGRAM

#pragma surface SurfProgram Standard fullforwardshadows vertex:VertProgram addshadow

#pragma target 3.0

struct Input
{
  float3 normalColor;
};

uniform float _Resolution;
uniform float _Displacement;

uniform half _Glossiness;
uniform half _Metallic;

void VertProgram(inout appdata_full v, out Input o)
{
  v.vertex.xyz += v.normal * sin((v.normal.x + v.normal.y + v.normal.z) * _Resolution) * _Displacement;

  o.normalColor = v.normal * 0.5 + 0.5;
}

void SurfProgram(Input i, inout SurfaceOutputStandard o)
{
  o.Albedo = i.normalColor;
  o.Metallic = _Metallic;
  o.Smoothness = _Glossiness;
}

ENDCG

Compute Shaders

Compute shaders are GPGPU programs that run outside of the rendering pipeline. They do not render anything to the screen or to any offscreen buffers. These shaders offer the most freedom, but this also comes at the price of complexity. There is no framework to follow; it is up to the programmer to set up the input data, the output data, the functions to execute, and even how the workload is split among the threads and thread groups on the GPU.

In Unity, the compute shader program is represented by the ComputeShader class and the data is managed inside a ComputeBuffer container. A standard Unity script is usually required to load the shader, build and upload the input data, and run the shader.

The data is just an array of numbers, usually floats, organized into structs. It is a good idea to keep the stride a multiple of 4 floats for compatibility and performance on the GPU.

public struct SrcVertex
{
  public Vector3 position;
  public Vector3 normal;
  public Vector2 fill; // Fill so that struct is a multiple of 4 floats.
}

public struct DstVertex
{
  public Vector4 position;
  public Vector4 color;
}

ComputeBuffer objects must be allocated with enough memory to hold all the data. Our example will load position and normal vertex attributes from a pre-existing mesh into a source buffer, and prepare a destination buffer for position and color output attributes. The program will generate lines (2 vertices) from points (1 vertex), so the destination buffer should hold twice as much data as the source buffer.

protected void Awake()
{
  // Populate source buffer.
  int strideSrc = Marshal.SizeOf(typeof(SrcVertex)) / sizeof(float);
  _numSrc = mesh.vertexCount;
  var dataSrc = new float[_numSrc * strideSrc];
  List<Vector3> positions = new List<Vector3>();
  List<Vector3> normals = new List<Vector3>(); 
  mesh.GetVertices(positions);
  mesh.GetNormals(normals);
  for (int v = 0; v < _numSrc; ++v)
  {
    dataSrc[v * strideSrc + 0] = positions[v].x;
    dataSrc[v * strideSrc + 1] = positions[v].y;
    dataSrc[v * strideSrc + 2] = positions[v].z;
    dataSrc[v * strideSrc + 3] = normals[v].x;
    dataSrc[v * strideSrc + 4] = normals[v].y;
    dataSrc[v * strideSrc + 5] = normals[v].z;
    dataSrc[v * strideSrc + 6] = 0.0f;
    dataSrc[v * strideSrc + 7] = 0.0f;
  }

  _bufferSrc = new ComputeBuffer(_numSrc, Marshal.SizeOf(typeof(SrcVertex)), ComputeBufferType.Default);
  _bufferSrc.SetData(dataSrc);

  // Populate destination buffer.
  int strideDst = Marshal.SizeOf(typeof(DstVertex)) / sizeof(float);
  // We have double the vertices because each line is made of 2 vertices.
  _numDst = _numSrc * 2;
  var dataDst = new float[_numDst * strideDst];
  for (int v = 0; v < _numDst; ++v)
  {
    dataDst[v * strideDst + 0] = 0.0f;
    dataDst[v * strideDst + 1] = 0.0f;
    dataDst[v * strideDst + 2] = 0.0f;
    dataDst[v * strideDst + 3] = 0.0f;
    dataDst[v * strideDst + 4] = 0.0f;
    dataDst[v * strideDst + 5] = 0.0f;
    dataDst[v * strideDst + 6] = 0.0f;
    dataDst[v * strideDst + 7] = 0.0f;
  }

  _bufferDst = new ComputeBuffer(_numDst, Marshal.SizeOf(typeof(DstVertex)), ComputeBufferType.Default);
  _bufferDst.SetData(dataDst);
}

The compute shader will then be dispatched every frame, overwriting the previous data.

  • A ComputeShader can hold many functions, or kernels. We need to tell the GPU which function to run by selecting the kernel with the ComputeShader.FindKernel() function.
  • A ComputeBuffer is passed into the shader as a uniform using the ComputeShader.SetBuffer() function. Note that this function also requires the kernel ID, as some uniforms like buffers and textures are tied to a kernel.
  • The shader is run by calling the ComputeShader.Dispatch() function. This function takes the number of workgroups in 3 dimensions to split the workload into. The shader code will define how many threads run in each workgroup.
public void Update()
{
  var kernelLines = computeLines.FindKernel("CSMain");
  if (kernelLines == -1)
  {
    Debug.LogError($"[{GetType().Name}] Kernel 'CSMain' not found!");
    return;
  }

  computeLines.SetBuffer(kernelLines, "_BufferSrc", _bufferSrc);
  computeLines.SetBuffer(kernelLines, "_BufferDst", _bufferDst);
  computeLines.SetFloat("_NormalLength", normalLength);
  computeLines.Dispatch(kernelLines, _numSrc, 1, 1);
}

The shader code must define structs for the input and output buffers matching the ones we created on the CPU, but using Cg/HLSL syntax.

struct SrcVertex
{
  float3 position;
  float3 normal;
  float2 fill;
};

struct DstVertex
{
  float4 position;
  float4 color;
};

Uniform buffers are defined as StructuredBuffer for input data (read-only) and RWStructuredBuffer for output data (read-write).

uniform StructuredBuffer<SrcVertex>   _BufferSrc;
uniform RWStructuredBuffer<DstVertex> _BufferDst;
  • The kernel function is defined using a #pragma kernel command.
  • The number of threads per group is defined in the [numthreads(,,)] attribute placed before the kernel function.
  • The thread dispatch ID is passed as a function argument. This value can be used to determine which index in the buffers corresponds to this shader instance.
#pragma kernel CSMain

uniform StructuredBuffer<SrcVertex>   _BufferSrc;
uniform RWStructuredBuffer<DstVertex> _BufferDst;

uniform float _NormalLength;

[numthreads(1, 1, 1)]
void CSMain(uint3 id : SV_DispatchThreadID)
{
  float3 normalPos = _BufferSrc[id.x].position + _BufferSrc[id.x].normal * _NormalLength;
  float3 normalCol = _BufferSrc[id.x].normal * 0.5 + 0.5;

  int idx = id.x * 2 + 0;
  int jdx = id.x * 2 + 1;

  _BufferDst[idx].position = float4(_BufferSrc[id.x].position, 1.0);
  _BufferDst[idx].color = float4(normalCol, 1.0);

  _BufferDst[jdx].position = float4(normalPos, 1.0);
  _BufferDst[jdx].color = float4(normalCol, 0.0);
}

Back on the CPU, we can use the output buffer from our compute shader as an input to a standard rendering shader. As we do not have any meshes to render, only data, we call Graphics.DrawProcedural() with a count value, which tells the GPU how many times to execute the vertex shader.

public void OnRenderObject()
{
  materialLines.SetPass(0);
  materialLines.SetMatrix("_Transform", transform.localToWorldMatrix);
  materialLines.SetBuffer("_Vertices", _bufferDst);
  Graphics.DrawProcedural(materialLines, new Bounds(transform.position, Vector3.one * 1000f), MeshTopology.Lines, _numDst);
}

We can pass the buffer into the rendering shader as a uniform, and define it with a corresponding struct in our ShaderLab code.

Since we do not have any input geometry, our vertex shader has no attributes to work with as input. We will therefore pass a variable with semantic SV_VertexID as an argument, which will just be a vertex index we can use to pull data from our buffer array.

CGPROGRAM

#pragma target 5.0

#pragma vertex VertProgram
#pragma fragment FragProgram

struct DstVertex
{
  float4 position;
  float4 color;
};

struct VertInterpolators
{
  float4 position : SV_POSITION;
  float4 color    : COLOR;
};

uniform StructuredBuffer<DstVertex> _Vertices;

uniform float4x4 _Transform;

VertInterpolators VertProgram(uint id : SV_VertexID)
{
  VertInterpolators o;
  o.position = UnityObjectToClipPos(mul(_Transform, _Vertices[id].position));
  o.color = _Vertices[id].color;

  return o;
}

fixed4 FragProgram(VertInterpolators i) : SV_Target
{
  fixed4 color = i.color;
  return color;
}

ENDCG

Download the Unity project with all examples here.