WebGL/GLSL - How does a ShaderToy work?

ShaderToy runs small GLSL programs that handle all of the lighting, geometry, shading, and so on themselves. There is essentially no vertex geometry: most of the 3D work is done by ray casting inside the fragment shader, and you can of course also write purely 2D shaders.

Any color and spatial maths can be programmed in GLSL. Combining a few well-known algorithms lets you build isosurfaces and other shapes, project textures onto those isosurfaces, and render everything by ray casting: sending imaginary rays from the viewer out into the scene and testing what they intersect along the way. There are many ray-casting techniques for 3D.
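
To make that concrete, here is a minimal sketch of one such technique, sphere tracing (ray marching) a signed distance function, written in ShaderToy's style, where the site calls mainImage once per pixel and supplies iResolution. The sdSphere function, camera position, and step count are arbitrary illustrative choices, not a fixed recipe:

// Signed distance from point p to a unit sphere at the origin (an isosurface).
float sdSphere(vec3 p) {
    return length(p) - 1.0;
}

void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    // Aspect-corrected coordinates, roughly in [-1, 1] vertically.
    vec2 uv = (2.0 * fragCoord - iResolution.xy) / iResolution.y;

    vec3 ro = vec3(0.0, 0.0, -3.0);           // ray origin: the imaginary viewer
    vec3 rd = normalize(vec3(uv, 1.5));       // ray direction through this pixel

    // March along the ray, stepping by the distance to the nearest surface.
    float t = 0.0;
    bool hit = false;
    for (int i = 0; i < 64; i++) {
        float d = sdSphere(ro + t * rd);
        if (d < 0.001) { hit = true; break; } // close enough: the ray hit the surface
        t += d;
        if (t > 20.0) break;                  // the ray left the scene
    }

    // Simple shading: the farther away the hit point, the darker the pixel.
    vec3 col = hit ? vec3(1.0 - 0.1 * t) : vec3(0.0);
    fragColor = vec4(col, 1.0);
}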

Visit www.iquilezles.org for an idea of the different tools and techniques that are used in ShaderToy/GLSL graphics.


ShaderToy is a tool for writing pixel shaders.

What are pixel shaders?

If you render a full-screen quad, meaning that each of its four vertices is placed in one of the four corners of the viewport, then the fragment shader for that quad is called a pixel shader, because you could say that each fragment now corresponds to exactly one pixel of the screen. So a pixel shader is a fragment shader for a full-screen quad.

So the attributes are always the same, and so is the vertex shader:

positions = [ [-1,1], [1,1], [-1,-1], [1,-1] ]
uv = [ [0.0, 1.0], [1.0, 1.0], [0.0, 0.0], [1.0, 0.0] ]

And that quad is rendered as a TRIANGLE_STRIP. Also, instead of setting UVs explicitly, some prefer to use the fragment shader's built-in variable gl_FragCoord, which is then divided by, for example, a uniform vec2 uScreenResolution.

Vertex shader:

attribute vec2 aPos;
attribute vec2 aUV;
varying vec2 vUV;

void main() {
    // Positions are already in clip space, so pass them straight through.
    gl_Position = vec4(aPos, 0.0, 1.0);
    vUV = aUV;
}

And the fragment shader would then look something like this:

precision mediump float;

uniform vec2 uScreenResolution;
varying vec2 vUV;

void main() {
    // vUV is equal to gl_FragCoord.xy / uScreenResolution
    // do some pixel-shader related work here, e.g. output the UVs as a color:
    vec3 someColor = vec3(vUV, 0.0);
    gl_FragColor = vec4(someColor, 1.0); // gl_FragColor is a vec4 (RGBA)
}
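
If you go the gl_FragCoord route mentioned above instead of passing UVs, the equivalent computation inside the fragment shader is simply this (again assuming the application supplies a uniform named uScreenResolution; the name is just an example):

precision mediump float;

uniform vec2 uScreenResolution;

void main() {
    // gl_FragCoord.xy holds the pixel's window coordinates, so dividing by the
    // resolution gives the same 0..1 UVs without needing a varying at all.
    vec2 uv = gl_FragCoord.xy / uScreenResolution;
    gl_FragColor = vec4(uv, 0.0, 1.0);   // visualize the UVs as a red/green gradient
}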

ShaderToy supplies a few uniforms for you by default, iResolution (the equivalent of uScreenResolution above), iGlobalTime (renamed iTime in newer versions), iMouse, ... which you can use in your pixel shader.
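
For instance, a complete minimal ShaderToy shader looks roughly like this; ShaderToy wraps your code and calls mainImage once per pixel, and the color formula itself is just an arbitrary animated gradient:

void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    vec2 uv = fragCoord / iResolution.xy;                              // normalized pixel coordinates, 0..1
    vec3 col = 0.5 + 0.5 * cos(iTime + uv.xyx + vec3(0.0, 2.0, 4.0));  // time-varying color
    fragColor = vec4(col, 1.0);
}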

For coding geometry directly into the fragment shader (aka pixel shader), developers use something called ray tracing. That is quite a complex area of programming, but in short: you describe your geometry through mathematical formulas, and later in the pixel shader, when you want to check whether a given pixel belongs to your geometry, you evaluate those formulas to find out. Googling a bit should give you plenty of resources on what ray tracers are and how exactly they are built, and this might help: How to do ray tracing in modern OpenGL?
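
As a small taste of that idea, here is a sketch of an analytic ray-sphere intersection in GLSL (the function name and parameters are illustrative, not from any particular library). The sphere is "described by a formula", and solving the resulting quadratic tells you whether a ray from the viewer hits it:

// Returns the distance along the ray to a sphere of radius r centred at c,
// or -1.0 if the ray misses it.  ro = ray origin, rd = normalized ray direction.
float intersectSphere(vec3 ro, vec3 rd, vec3 c, float r) {
    vec3 oc = ro - c;
    float b = dot(oc, rd);
    float h = b * b - (dot(oc, oc) - r * r);   // discriminant of the quadratic
    if (h < 0.0) return -1.0;                  // no real roots: the ray misses
    return -b - sqrt(h);                       // nearest intersection
}

In a pixel shader you would build ro and rd from the camera and the pixel's UV, call a function like this for each object, and shade the pixel depending on whether the returned distance is positive.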

Hope this helps.