Motion Blur All the Way Down


“Torusphere Accelerator”, the animation that motivated this article.
What happens if you take motion blur past its logical extreme? Here are some fun observations and ideas I encountered while trying to answer this question, with an attempt to apply the results in a procedural animation.
What is motion blur supposed to look like?
Motion blur started out purely as a film artifact, the result of a subject moving while the camera’s shutter is open. This artifact turned out to be desirable, especially for videos, because it improves the perceptual similarity between a video and a natural scene, something I’ll dive into in this section.
In a 3D and animation context, it’s interesting to note that these two goals (looking natural, and simulating a camera) might disagree, and might result in different motion blurs. I’ll keep the simulation aspect as a side note, and ask what the most natural possible motion blur should look like. This can be broken down into a few questions:
- How do we perceive a natural moving scene?
- How do we perceive this scene reproduced in a video?
- What is the perceptual difference between these two cases?
- How can video motion blur minimize this difference?
Perception of motion in a natural scene
For the purpose of crafting motion blur, we can start by examining the very first steps of human vision, where the light hits our retina and phototransduction takes place. Under well-lit conditions, this is handled by cone-type cells. Phototransduction is not instantaneous, and we can model this lag by smoothing out the light stimulus over time.
Combining the above weighting function’s shape with known human cone response times, we can create a detailed simulation of a perceived image based on any input scene. This idea has been used before, but without the shape of the time response.
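One way to write this model down (the notation here is mine, not from the sources): if $L(x, t)$ is the light stimulus at retinal position $x$ and $w$ is the temporal weighting function above, normalized to integrate to 1, the perceived image $P$ is a convolution over the recent past:
$$
P(x, t) = \int_0^\infty w(\tau)\, L(x, t-\tau)\, d\tau
$$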

2. Motion smearing. Left: example scene, assumed to be natural and continuous. Right: simulated perceived image. This assumes the viewer is looking at a fixed point, and not tracking the object with their eyes.
What this shows is that there already exists a natural blur at the photoreceptor level, a phenomenon often called motion smear. So why do we add artificial motion blur in videos, and what’s the link between motion smear and motion blur?
Perception of a scene on a screen
Let’s look at what this perceived image looks like when viewing a screen with a limited number of frames per second.

3. Perception of a video. Left: example scene as it would appear on a screen. Right: perceived image.
This finally gives us a way of visualizing the usefulness of motion blur. When viewing a video without motion blur, the resulting perceived image looks like overlaid frames instead of the expected motion smear. The situation is improved with a motion-blurred video, where each frame no longer shows an instant in time, but an average of all the moments within the time interval covered by that frame. This is analogous to a video made with a camera whose shutter is open throughout each frame’s timespan. The resulting perceived image looks much more like the natural case.
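Stated as a formula (again my notation): with $F_n$ the time interval covered by frame $n$ and $L$ the natural scene, a traditionally motion-blurred frame is the average
$$
V_n(x) = \frac{1}{|F_n|}\int_{F_n} L(x, t)\, dt
$$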
Making the screen natural with a shutter function
Something still looks off in the perceived image for traditional motion blur. At some object speeds, artifacts still appear as discontinuities in the motion smear. These can be almost eliminated by applying a shutter function: instead of averaging all the moments within a frame, we weight them by a function, so that the first and last moments have less weight than the central moment of the frame. The name “shutter function” comes from the analogy with shutter efficiency in a diaphragm camera, where the shutter takes time to transition between the open and closed states. But instead of simulating cameras, the shutter function can be chosen in a way that minimizes the perceptual difference between the screen and a natural scene. The problem then becomes very similar to crafting window functions in signal processing, and indeed the most popular window functions give very good results.
4. Applying a shutter function.
How well does this work? You can get a rough idea in the following demo, which can be switched between motion blur with and without a shutter function. My impression is that the shutter function is not necessary at low speeds, but is noticeably more natural for fast-moving objects. This makes it highly relevant to the “past the logical extreme” experiment I’m aiming for. It also looks smoother in still frames, which is incidental but sometimes relevant.
5. Live comparison of motion blur with and without a shutter function.
I’ll emphasize that this perceptual approach to motion blur is not conventional, and could be misguided in some way. The common approach is to simulate cameras, which results in zero time overlap between frames, and often completely discards moments that fall between frames. Meanwhile, the method I’m describing results in overlapping time ranges for successive frames. With that out of the way, let’s try applying this technique.
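To make the contrast concrete (illustrative notation, not from the original post): with frame period $\Delta$ and shutter efficiency $e\le 1$, a simulated camera integrates frame $n$ over an interval that never overlaps its neighbors, while the shutter functions used here extend half a frame beyond the frame interval on each side:
$$
\text{camera: } \left[n\Delta,\ (n+e)\Delta\right]
\qquad
\text{here: } \left[n\Delta-\tfrac{\Delta}{2},\ (n+1)\Delta+\tfrac{\Delta}{2}\right]
$$
so successive shutters overlap, and they can be chosen to sum to 1 at every instant.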
Getting irrational with the torusphere
To make things both difficult and interesting, I decided to make this infinite motion blur animation as a realtime shader. Because I like hardship and misery, yes, but mostly because I want the end product to be interactive, and in this case, a shader might be the simplest way.
First, how does one render motion blur in realtime? After ruling out multisampling and analytic ray-traced motion blur, I settled on a horrible hack best described as “integrated volume motion blur”. Represent the moving object as a function that takes coordinates (including time) and returns density (the inside is 1, the rest is 0). Integrate this density function over time, and the result should give you a “motion-blurred density” over any time interval. The result can be rendered by volume ray casting. This method is not photorealistic, but it can handle extremely long trails with realtime performance.
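Here is a minimal sketch of that idea in GLSL, with hypothetical names (an illustration, not code from the final shader): a ball of radius R moving along x at constant speed v, box-blurred over a frame’s time interval [f1, f2].
// Integrated volume motion blur, sketched for a ball of radius R moving
// along +x at speed v (assumed > 0). The point p is inside the ball during
// the time interval [i1, i2]; the motion-blurred density is the fraction
// of the frame interval [f1, f2] spent inside the ball.
float blurredBallDensity(vec3 p, float R, float v, float f1, float f2) {
float w2 = R*R - dot(p.yz, p.yz); // squared radius of the ball slice at this y,z
if (w2 <= 0.) return 0.; // this point is never inside the ball
float w = sqrt(w2);
float i1 = (p.x - w)/v; // time at which the point enters the ball
float i2 = (p.x + w)/v; // time at which it exits
return max(0., min(f2, i2) - max(f1, i1))/(f2 - f1);
}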
The intended animation combines an orbiting sphere and a rotating torus, both of which must be motion-blurred up to essentially infinite speed.
Motion-blurred sphere
Taking a 2D slice of the orbiting sphere, the problem is reduced to finding the motion-blurred density for an orbiting circle. Let’s assume an orbital radius $R$, and a circle of radius $a$. The circle’s center is always at a distance of $R$ from the origin, so it can start at the point $(R, 0)$. This means that initially, all points $(x, y)$ on the circle are defined by:
$$
(x - R)^2 + y^2 = a^2
$$
In order to work with the orbit, this should be expressed in polar coordinates $(r,\theta)$, which can be done by substituting $x = r\cos\theta$ and $y = r\sin\theta$:
$$
r^2 - 2 r R \cos\theta + R^2 = a^2
$$
Finding the density function means taking any point and answering the question: when does this point enter the orbiting circle? When does it exit? The answer lies in the angular coordinate $\theta$ of the initial object’s surface, at the same radial coordinate $r$ as the given point. Because the object is orbiting, this angle is directly related to the time when the object will hit the point. So let’s find $\theta$ based on the above definition of the surface:
$$
\theta = \pm\arccos\frac{R^2 + r^2 - a^2}{2 r R}, \quad \theta\in[-\pi,\pi]
$$
The $\pm$ sign comes from inverting $\cos$. This $\pm$ is useful, since it determines which half-circle is defined: positive or negative $\theta$. The two halves can be combined to get a polar expression for the density $\rho$ of the corresponding disk:
$$
\rho(r,\theta) =
\begin{cases}
1 & \text{if } -h(r) \lt \theta \lt h(r) \\
0 & \text{otherwise}
\end{cases}\\[2ex]
\text{where } h(r) = \arccos\frac{R^2 + r^2 - a^2}{2 r R}
$$
From this starting position, the disk orbits around the origin. This is equivalent to subtracting the time $t$ times the speed $v$ from the angle coordinate:
$$
\rho(\colorbox{yellow}{$t,$}r,\theta) =
\begin{cases}
1 & \text{if } -h(r) \lt \theta \colorbox{yellow}{$-vt$} \lt h(r) \\
0 & \text{otherwise}
\end{cases}
$$
We can separate out $t$ to get the time interval $I$ during which the object is present at a given point $(r,\theta)$:
$$
\rho(t, r,\theta) =
\begin{cases}
1 & \text{if } t\in I \\
0 & \text{otherwise}
\end{cases}\\[2ex]
I=\left[\cfrac{\theta-h(r)}{v}, \cfrac{\theta+h(r)}{v}\right]
$$
The motion-blurred density is the integral of the density $\rho$ over the current frame’s time interval $F$. This works out to be the length of the intersection between $I$ and $F$. This can also be described intuitively: we’re measuring how much of the frame’s time is occupied by the object at a given point in space.
$$\int_F\rho(t,r,\theta)\,dt = \int_{F\cap I}1\,dt = |F\cap I|$$
Let’s apply a shutter function $s$. For simplicity, assume $s$ is already centered on the current frame’s time span. We can apply it by multiplying the density with $s(t)$ before integrating, replacing the need for any bounds of integration on the density. If $s$ has an antiderivative $S$, then the motion-blurred density becomes:
$$
\int\rho(t,r,\theta)\, s(t)\, dt =
\int_I s(t)\, dt =
S(\max I)-S(\min I)
$$
This can be implemented in a shader and works with any shutter function. However, based on the goals from the first part of this article, shutter functions should have an integral of 1, and should overlap in such a way that the sum of all shutter functions at any point in time is always 1. This can be satisfied with a trapezoid function, or with a sinusoid function such as this one, used in the animation:
$$
s(t)=\begin{cases}
\cfrac{1-\cos\frac{(t-A)2\pi}{B-A}}{B-A} & \text{if } A \lt t \lt B \\
0 & \text{otherwise}
\end{cases}\\[2ex]
A=\min F-\frac{|F|}{2},\quad B=\max F+\frac{|F|}{2}
$$
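The shader only ever needs an antiderivative of this shutter. Integrating the expression above gives, up to an additive constant and for $A \lt t \lt B$:
$$
S(t)=\frac{t-A}{B-A}-\frac{1}{2\pi}\sin\frac{(t-A)2\pi}{B-A}
$$
This is exactly what the iCosShutter function computes in the listings below.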
Motion-blurred torus
The same process can be followed for the torus. A 2D vertical slice of a torus is called a spiric section, or Spiric of Perseus. Besides sounding like an epic videogame weapon, it also has a convenient formulation in polar coordinates. Take a torus of minor radius $a$ and major radius $b$. Take a section at position $c$, and within this section, in polar coordinates $(r,\theta)$, all torus points are defined by:
$$
(r^2-a^2+b^2+c^2)^2 = 4b^2(r^2\cos^2\theta+c^2)
$$
Solving for $\theta$, assuming $\theta\in[-\pi/2,\pi/2]$, this becomes:
$$
\theta = \pm\arccos\frac{\sqrt{(a^2 - b^2 - c^2 - r^2 - 2 b c)(a^2 - b^2 - c^2 - r^2 + 2 b c)}}{2 b r}
$$
Once again, the inside of the torus is enclosed between the positive and negative cases of the $\pm$ sign, giving us a polar expression for the density of the solid torus. The remaining steps to get the motion-blurred rotating torus are exactly the same as for the sphere above.
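As a sanity check before implementing it, the product under the square root can be expanded into a polynomial in $r^2$ and $c^2$; this is the sum computed by spiricPolarSurface in the listings below, with the constant term $(a^2-b^2)^2$ precomputed as torCst:
$$
(a^2-b^2-c^2-r^2)^2-4b^2c^2 = (a^2-b^2)^2 - 2a^2c^2 - 2a^2r^2 - 2b^2c^2 + 2b^2r^2 + c^4 + 2c^2r^2 + r^4
$$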
/*
Motion-blurred spiric section.
*/
precision mediump float;
#define PI 3.1415926535897932384626433832795
// Inputs
varying vec2 iUV;
uniform float iTime;
uniform vec2 iRes;
// These lines are parsed by dspnote to generate sliders
uniform int video_motion_blur; //dspnote param: traditional | with_sine_shutter
uniform float minor_radius; //dspnote param: 0.01 - 0.5, 0.2
uniform float major_radius; //dspnote param: 0.01 - 0.5, 0.3
uniform float slice_position; //dspnote param: -1 - 1, 0.1
uniform float rotation_speed; //dspnote param: 0 - 300, 10 (rad/s)
const float time = 0.;
// torus minor and major radius, squared and combined
vec2 tor, tor2;
float torCst;
// the cosine shutter function is:
// (1-cos((x-t1) 2 PI / (t2-t1)))/(t2-t1) if t1 < x < t2, else 0
// this returns its antiderivative
float iCosShutter(float x, float t1, float t2) {
float d = 1./(t2 - t1);
x -= t1;
return x*d - sin(2.*PI*x*d)/(2.*PI);
}
// Within the slice at position z, in polar coordinates at radius r,
// find the angle of the torus surface.
// Returns 0 if r is entirely outside the torus.
// Returns -1 if r is entirely inside the torus.
float spiricPolarSurface(float r, float z) {
float r2 = r*r;
float z2 = z*z;
float sum = torCst-2.*tor2.x*z2-2.*tor2.x*r2-2.*tor2.y*z2+2.*tor2.y*r2+z2*z2+2.*z2*r2+r2*r2;
if (sum < 0.) return -1.;
float sq = sqrt(sum)/(2.*tor.y*r);
if (abs(sq) > 1.) return 0.;
return acos(sq);
}
float tradMotionBlur(float obj1, float obj2) {
// Shutter time interval. Should include the frameStart, but it's
// moved to the pixel coordinates for easier wrap management.
float shut1 = -.5/60.;
float shut2 = .5/60.;
// the box shutter function is (shut1 < x < shut2) ? 1/(shut2-shut1) : 0
// (reconstructed) its integral over [obj1, obj2] is the normalized overlap
return clamp((min(obj2, shut2) - max(obj1, shut1))/(shut2 - shut1), 0., 1.);
}

6. Motion-blurred spiric section.
Putting it together
All that’s left is to “draw the rest of the owl” by combining the elements in a convincing way, and by using standard volume ray casting on the result. Surface normals need extra care, because there is no such thing as “motion-blurred surface normals”, so they’re simply blended together here.
The animation should run below with basic mouse/touch interaction. It might not work well on all devices, so there’s also a pre-rendered video at the top of the page. You can also find this shader on Shadertoy.
/*
Infinite speed motion blur using volume ray casting.
Blog post to go with it: https://www.osar.fr/notes/motionblur
Additionally on Shadertoy: https://www.shadertoy.com/view/cdXSRn
*/
#extension GL_OES_standard_derivatives : enable
precision mediump float;
#define PI 3.14159265359
// Inputs
varying vec2 iUV;
uniform float iTime;
uniform vec2 iRes;
// These lines are parsed by dspnote for interaction
uniform float cam_x; //dspnote param
uniform float cam_y; //dspnote param
// basic material, light and camera settings
#define DIFFUSE .9
#define SPEC .9
#define REFLECT .05
const vec3 lightDir = normalize(vec3(-5, -6, -1));
#define CAM_D 2.4
#define CAM_H .75
// marching iterations
#define ITER 40
#define SHADOW_ITER 20
// marching step, which depends on the size of the bounding sphere
float stepSz;
// torus shape ratio = minor radius / major radius
#define TOR_RATIO .38
// speed for: time remapping; ball transition into orbit; object rotation
#define TIMESCALE .015
#define RAD_SPEED 100.
const float RT_RAD_SPEED = sqrt(RAD_SPEED);
const float MAX_SPEED = floor(30./(TIMESCALE*PI*2.)+.5)*PI*2.;
// remapped time for large scale events
float T;
// cycle duration in remapped time
// it depends on the torus ratio because the radiuses zoom into each other
const float C = log((1. + TOR_RATIO) / TOR_RATIO);
const float D = C * .5;
// ball and torus speed, rotation and transformation matrix
float balSpeed, balRot, torSpeed, torRot;
mat2 balMat, torMat;
// ball and torus size and cycle progression
float balSz, torSz, balCycle, torCycle;
// ball and torus motion blur amplification
float balAmp, torAmp;
// torus minor and major radius, with squared version
vec2 tor, tor2;
// constants for torus angle and ball normals
float torCst, balCst;
// density and normity x-fades, ball orbit radius, cosmetic adjustments
float densXf, normXf, balOrbit, torNormSz, strobe;
// by Dave_Hoskins: https://www.shadertoy.com/view/4djSRW
float hash14(vec4 p4) {
p4 = fract(p4 * vec4(.1031, .1030, .0973, .1099));
p4 += dot(p4, p4.wzxy+33.33);
return fract((p4.x + p4.y) * (p4.z + p4.w));
}
// by iq: https://iquilezles.org/articles/filterableprocedurals/
float filteredGrid(vec2 p, float scale, vec2 dpdx, vec2 dpdy) {
float iscale = 1./scale;
float N = 60.0*scale;
p *= iscale;
vec2 w = max(abs(dpdx), abs(dpdy))*iscale;
vec2 a = p + 0.5*w;
vec2 b = p - 0.5*w;
vec2 i = (floor(a)+min(fract(a)*N,1.0)-
floor(b)-min(fract(b)*N,1.0))/(N*w);
return (1.0-i.x*.6)*(1.0-i.y*.6);
}
// by iq: https://iquilezles.org/articles/smin/
float smin(float a, float b, float k) {
float h = max(k-abs(a-b), 0.0)/k;
return min(a, b) - h*h*k*(1.0/4.0);
}
mat2 rot2d(float a) {
float c = cos(a);
float s = sin(a);
return mat2(c, -s, s, c);
}
// 2-point sphere intersection
vec2 sphIntersect2(vec3 ro, vec3 rd, vec4 sph) {
vec3 oc = ro - sph.xyz;
float b = dot(oc, rd);
float c = dot(oc, oc) - sph.w*sph.w;
float h = b*b - c;
if(h<0.0) return vec2(-1.0, -1.0);
h = sqrt(h);
return vec2(-b-h, -b+h);
}
// antiderivative of the cosine shutter function, which is:
// (1-cos((x-t1) 2 PI / (t2-t1)))/(t2-t1) if t1 < x < t2, else 0
float iCosShutter(float x, float t1, float t2) {
float d = 1./(t2 - t1);
x -= t1;
return x*d - sin(2.*PI*x*d)/(2.*PI);
}
// motion blurred density = integral of { object presence * window function }
float cosMotionBlur(float obj1, float obj2) {
// Shutter time interval. Should include the frameStart, but it's
// moved to the pixel coordinates for easier wrap management.
float shut1 = -1./60.;
float shut2 = 1./60.;
// integral of the shutter perform from obj1 to obj2
return iCosShutter(obj2, shut1, shut2) - iCosShutter(obj1, shut1, shut2);
}
// Take a slice at depth y. In polar coordinates, at radius r,
// find the polar angle of the ball surface.
// Returns 0 if r is entirely outside the ball.
// Returns -1 if r is entirely inside the ball.
float ballPolarSurface(float r, float y) {
float rad = balSz*balSz – y*y;
if (rad <= 0.) return 0.;
rad = sqrt(rad);
if (r <= rad-balOrbit) return -1.;
float div = (balOrbit*balOrbit+r*r-rad*rad)/(2.*r*balOrbit);
if (abs(div) > 1.) return 0.;
return acos(div);
}
// motion-blurred ball density
float ballDensity(vec3 p, float speed) {
p.xz *= balMat;
p.z = abs(p.z);
vec2 pol = vec2(length(p.xz), atan(p.z, p.x));
float bA = ballPolarSurface(pol.x, p.y);
if (bA == -1.) return 1.;
// Time interval for the object presence at this pixel.
float obj1 = (pol.y-bA)/speed;
float obj2 = (pol.y+bA)/speed;
return cosMotionBlur(obj1, obj2);
}
// ball "normity", pseudo distance field to calculate normals
float ballNormity(vec3 p) {
p.xz *= balMat;
p.z = abs(p.z);
vec2 pol = vec2(length(p.xz), atan(p.z, p.x));
pol.y = max(0., pol.y-balCst);
p.x = pol.x*cos(pol.y);
p.z = pol.x*sin(pol.y);
return length(p-vec3(balOrbit, 0., 0.))-balSz;
}
// Take a slice at depth z. In polar coordinates, at radius r,
// find the polar angle of the torus surface.
// Returns 0 if r is entirely outside the torus.
// Returns -1 if r is entirely inside the torus.
float spiricPolarSurface(float r, float z) {
float r2 = r*r;
float z2 = z*z;
float sum = torCst-2.*tor2.x*z2-2.*tor2.x*r2-2.*tor2.y*z2+2.*tor2.y*r2+z2*z2+2.*z2*r2+r2*r2;
if (sum < 0.) return -1.;
float sq = sqrt(sum)/(2.*tor.y*r);
if (abs(sq) > 1.) return 0.;
return acos(sq);
}
// motion-blurred density of a half torus (a macaroni)
float halfTorusDensity(vec2 pol, float z, float speed) {
float da = spiricPolarSurface(pol.x, z);
if (da == 0.) return 0.;
if (da == -1.) return 1.;
// Time interval for the object presence at this pixel.
float obj1 = (pol.y-da)/speed;
float obj2 = (pol.y+da)/speed;
return cosMotionBlur(obj1, obj2);
}
// motion-blurred torus density
// (the body of this function was partly lost in the source; this is a
// reconstruction: the full torus is the sum of two half-torus densities,
// the second one rotated by half a turn)
float torusDensity(vec3 p3d, float speed) {
p3d.xy *= torMat;
vec2 pol = vec2(length(p3d.xy), atan(p3d.y, p3d.x));
float da = halfTorusDensity(pol, p3d.z, speed);
if (da == 1.) return 1.;
pol.y -= sign(pol.y)*PI;
float da2 = halfTorusDensity(pol, p3d.z, speed);
if (da2 == 1.) return 1.;
return min(1., da+da2);
}
// torus "normity", pseudo distance field to calculate normals
float torusNormity(vec3 p, float speed) {
p.xy *= torMat;
float shell = abs(length(p)-tor.y)-tor.x*.3;
vec2 q = vec2(length(p.xz)-tor.y,p.y);
float torus = length(q)-tor.x;
return -smin(speed*.002-torus, .1-shell, 0.1);
}
// combined density and normity
float density(vec3 p) {
float ball = ballDensity(p, balSpeed)*balAmp;
float torus = torusDensity(p, torSpeed)*torAmp;
return mix(ball, torus, densXf);
}
float normity(vec3 p) {
return mix(
ballNormity(p),
torusNormity(p*torNormSz, torSpeed*.5),
normXf);
}
vec3 getNormal(vec3 p) {
float d = normity(p);
vec2 e = vec2(.001, 0);
vec3 n = d - vec3(
normity(p-e.xyy),
normity(p-e.yxy),
normity(p-e.yyx));
return normalize(n);
}
// Because we're raycasting translucent stuff, this is called up to 28x per px
// so let's keep it fast
vec3 material(vec3 normal, vec3 rayDir) {
float diff = max(dot(normal, -lightDir), .05);
vec3 reflectDir = -lightDir - 2.*normal * dot(-lightDir, normal);
float spec = max(dot(rayDir, reflectDir), 0.);
return vec3(.8,.9,1.) * (diff * DIFFUSE + spec * REFLECT);
}
// render the torusphere by volume raycasting
vec4 march(vec3 ro, vec3 rd, float marchPos, float marchBack) {
float totMul = strobe*stepSz/0.05;
vec4 col = vec4(0.);
marchPos -= stepSz * hash14(vec4(rd*4000., iTime*100.));
int nMats = 0;
for(int i=0; i<ITER; i++) {
// (reconstructed: the loop header and density test were truncated in the source)
vec3 pos = ro + rd*marchPos;
float d = density(pos)*totMul;
if (d > 0.) {
d = d*d*.5;
float a2 = (1.-col.a)*d;
vec3 n = getNormal(pos);
col += vec4(material(n, rd)*a2, a2);
if (col.a > 0.95) break;
if (nMats++ > 28) break;
}
marchPos += stepSz;
if (marchPos > marchBack) break;
}
if (col.a > 0.) col.rgb /= col.a;
return col;
}
// render the ground shadow by volume raycasting without material
float shadowMarch(vec3 ro, vec3 rd, float marchPos, float marchBack) {
float ret = 0.;
float shadowStep = stepSz*2.;
float totMul = .47*strobe*shadowStep/0.05;
marchPos -= shadowStep * hash14(vec4(ro*4000., iTime*100.));
for(int i=0; i<SHADOW_ITER; i++) {
// (reconstructed: the loop header and density test were truncated in the source)
float d = density(ro + rd*marchPos)*totMul;
if (d > 0.) {
d = d*d*.9;
ret += (1.-ret)*d;
if (ret > 0.95) break;
}
marchPos += shadowStep;
if (marchPos > marchBack) break;
}
return min(1., ret);
}
// very inefficiently speed up the boring parts
float retime(float t) {
t *= TIMESCALE;
float s = .5+1.7*t*PI*2./D;
s = sin(s+sin(s+sin(s+sin(s)*0.3)*0.5)*0.75);
return s*.06+t*1.7;
}
// ball<->torus crossfade used separately by density and normity
float getXf(float x) {
x = (abs(mod(x-(D/4.), C)-D)/D-.5)*2.+.5;
// return smoothstep(0, 1, x)
x = 2.*clamp(x, 0., 1.)-1.;
return .5+x/(x*x+1.);
}
// The entire scene is essentially zooming out. The ground texture deals with
// that by crossfading different scales.
const float GRID_CYCLE = log(64.);
vec3 grid(vec2 pt, vec2 dx, vec2 dy, float phase, float t) {
float freq = exp(-mod(t+GRID_CYCLE*phase, GRID_CYCLE))*7.;
float amp = cos(PI*2.*phase+t*PI*2./GRID_CYCLE)*-.5+.5;
float g = filteredGrid(pt, freq, dx, dy)*amp;
return vec3(g,g,g);
}
void mainImage(inout vec4 fragColor, in vec2 fragCoord) {
// set all the globals...
T = retime(iTime+25.); // imagine a modulo here
balCycle = mod(T, C);
torCycle = mod(T+D, C);
// size of the bounding sphere for marching and step size
float boundSz = exp(-min(torCycle, 5.*(C-mod(T-D, C))));
stepSz = boundSz/20.;
// the ball/torus appear constant size and the camera appears to zoom out
// in the code the camera distance is fixed and the objects are shrinking
balSz = exp(-balCycle-D);
torSz = exp(-torCycle);
// the rotation is (theoretically) the integral of the speed, we need both
balSpeed = .04*MAX_SPEED*(cos(T*PI*2./C)+1.);
torSpeed = .04*MAX_SPEED*(cos((T+D)*PI*2./C)+1.);
balRot = MAX_SPEED*(sin(T*PI*2./C)/(PI*2./C)+T)/C;
torRot = MAX_SPEED*(sin((T+D)*PI*2./C)/(PI*2./C)+T)/C;
// (the original initialization here was lost in the source; the lines below
// are a reconstruction, and the orbit and smear constants are guesses)
tor = vec2(TOR_RATIO, 1.)*torSz; // torus minor and major radius
tor2 = tor*tor;
torCst = (tor2.x-tor2.y)*(tor2.x-tor2.y); // constant term of the spiric polynomial
balOrbit = balSz/TOR_RATIO; // guess: same orbit/size ratio as the torus
if (balCycle < 1./RT_RAD_SPEED) balOrbit *= balCycle*RT_RAD_SPEED; // guess: ball ramps into orbit
balCst = balSpeed/60.; // guess: angular smear used by ballNormity
balMat = rot2d(balRot);
torMat = rot2d(torRot);
// this smooths out the max speed -> zero speed illusion
densXf = getXf(T);
normXf = getXf(T+0.06);
// motion blur amplification is what makes this work
balAmp = 1.+balSpeed*balSpeed*.00013;
torAmp = 1.5+torSpeed*torSpeed*.00015;
torNormSz = max(1., 8.*(torCycle-.76));
// Normalized pixel coordinates (from 0 to 1)
vec2 uv = fragCoord/iRes.xy;
// the strobe effect simulates overlap between fast spin and slow spin
strobe = 1.-.1*(sin(iTime*83.+PI*smoothstep(.4, .6, uv.x))+1.)*(sin(2.2+T*PI*2./D)+1.)*.5;
// camera
float side = cos(CAM_H+cam_y)*CAM_D;
float camT = iTime*0.05+PI*.75+cam_x;
vec3 ro = vec3(sin(camT)*side, sin(CAM_H+cam_y)*CAM_D, cos(camT)*side); // camera position (ray origin)
vec3 ta = vec3(0., 0., 0.); // camera target
vec3 ww = normalize(ta - ro);
vec3 uu = normalize(cross(ww,vec3(0.0,1.0,0.0)));
vec3 vv = normalize(cross(uu,ww));
vec2 p = (-iRes.xy + 2.0*fragCoord)/iRes.y;
vec3 rd = normalize(p.x*uu + p.y*vv + 2.*ww);
// this starting color taints the entire scene (accidental but why not)
vec3 col = vec3(0.33, 0.18, 0.1);
// the ground plane
if (rd.y < 0.) {
vec3 groundPt = ro + rd*(-(ro.y+.8) / rd.y);
vec2 g2d = groundPt.xz;
vec2 dx = dFdx(g2d);
vec2 dy = dFdy(g2d);
// the ground texture zooms out by crossfading different scales
col += grid(g2d, dx, dy, 0., T)/3.;
col += grid(g2d, dx, dy, 1./3., T)/3.;
col += grid(g2d, dx, dy, 2./3., T)/3.;
float sqDist = dot(g2d, g2d);
col *= 2./(sqDist*.5*1.5+1.)-1.2/(sqDist*1.5*1.5+1.);
// are we in the shadow of the bounding sphere?
vec2 sphInter = sphIntersect2(groundPt, -lightDir, vec4(0.,0.,0.,boundSz));
if (sphInter != vec2(-1., -1.)) {
// march the torusphere to draw the shadow
float shad = shadowMarch(groundPt, -lightDir, sphInter.x, sphInter.y);
col *= 1.-shad*.7;
}
}
// the sky (only visible in the interactive version)
float up = dot(rd, vec3(0.,1.,0.));
col = mix(col, vec3(0.33, 0.18, 0.1)*.7, 1.-smoothstep(0., .02, abs(up)+.003));
col = mix(col, vec3(0.,0.,.1), smoothstep(0., .5, up));
// finally render the torusphere
vec2 sphInter = sphIntersect2(ro, rd, vec4(0.,0.,0.,boundSz));
if (sphInter != vec2(-1., -1.)) {
vec4 ts = march(ro, rd, sphInter.x, sphInter.y);
col = mix(col, ts.rgb, ts.a);
}
fragColor = vec4(col, 1.);
}
void main(void)
{
vec4 color = vec4(0.0, 0.0, 0.0, 1.0);
mainImage(color, gl_FragCoord.xy);
gl_FragColor = color;
}
tgt.style.touchAction = 'none';
let startPointerX = -1, startCamX = 0;
let startPointerY, startCamY = 0;
tgt.onpointerdown = (ev) => {
fig.activate();
tgt.setPointerCapture(ev.pointerId);
startPointerX = ev.clientX;
startPointerY = ev.clientY;
startCamX = fig.params['cam_x'].value;
startCamY = fig.params['cam_y'].value;
};
tgt.onpointerup = (ev) => {
tgt.releasePointerCapture(ev.pointerId);
startPointerX = -1;
};
tgt.onpointercancel = tgt.onpointerup;
tgt.onpointermove = (ev) => {
if (startPointerX === -1) return;
fig.params['cam_x'].value = startCamX - 4*(ev.clientX - startPointerX)/tgt.clientWidth;
fig.params['cam_y'].value = startCamY + 4*(ev.clientY - startPointerY)/tgt.clientHeight;
if (fig.params['cam_y'].value > 0.75) fig.params['cam_y'].value = 0.75;
if (fig.params['cam_y'].value < -1) fig.params['cam_y'].value = -1;
fig.dirty = true;
};

Torusphere accelerator (live)