2024-03-12 11:54:39

Screen Space Reflection

While screen space reflection (SSR) is a well-known effect, this article aims to introduce a novel method for calculating screen space reflections, one that I haven’t encountered online before.

Many online tutorials already cover screen space reflection, and most of them follow a similar process: calculate the reflection direction in world space, use a largely uniform step size to traverse world space, and for each step, compute the normalized device coordinates (NDC). Then compare the current depth with the depth value sampled from the depth texture. If the ray depth is less than the sampled depth, consider it a reflection intersection (a hit) and sample the color value at that location. This method can yield visually pleasing reflections, but hardly anyone mentions the staggering number of iterations required. We’ll discuss this further shortly.

Moreover, achieving good reflection results for objects at varying distances often requires different step sizes, but few people delve into this. Some slightly improved approaches perform a binary search after the ray hits the scene, to ensure smoother transitions between reflection colors. Others terminate stepping early (an early return), or interpolate between the reflection color and an environment reflection probe based on how the NDC coordinates compare to the [-1, 1] range.

Currently, the most efficient screen space ray marching method involves precomputing a Hierarchical ZBuffer. By stepping into and out of different LODs, this approach achieves the same results with fewer iterations. However, a Hierarchical ZBuffer is not a feature available in every project.

The most valuable tutorial one can find online is Screen Space Ray Tracing by Morgan McGuire. He also wrote a paper about his algorithm. In his article, McGuire highlights why stepping in world space can be problematic: after the perspective transformation, step positions that are evenly spaced in world space may barely move in screen space, so many more iterations are needed to achieve a desirable reflection. McGuire also presents an ingenious technique. He calculates the coordinates of the start and end points in both clip space and screen space. By linearly interpolating the z coordinate in clip space, the 1/w coordinate in clip space, and the xy coordinates in screen space, he eliminates the matrix computations otherwise required at every step. Definitely worth using!
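
In HLSL, the setup looks roughly like this (a sketch using the names from the final shader at the end of this article):

    // Set up once before the loop; each step then only needs lerps
    // instead of a matrix multiply per step.
    float k0 = 1.0f / originCS.w;
    float k1 = 1.0f / endCS.w;
    float3 q0 = originCS.xyz;
    float3 q1 = endCS.xyz;
    float2 p0 = originCS.xy * float2(1.0f, -1.0f) * k0 * 0.5f + 0.5f; // NDC xy -> UV
    float2 p1 = endCS.xy * float2(1.0f, -1.0f) * k1 * 0.5f + 0.5f;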

The goal of this article is to obtain the correct reflection color using as few iterations as possible within a single shader. Random sampling, blurring, and the Fresnel effect are not within the scope of this article. We will focus solely on Windows platform DX11 shaders, which allows us to avoid extensive platform-specific code. The Unity version used for this article is Unity 2022.3.21f1. The final shader code is provided at the end of the article.

Calculation of Reflections

Parameters

The calculation of reflections usually relies on three important parameters:

  1. Max Distance: This parameter considers reflections from objects within a certain range around the reflection point.
  2. Step Count: Increasing the number of steps results in more accurate reflections but also impacts performance.
  3. Thickness Params: In this article, an object’s default thickness is calculated as depth * _Thickness.y + _Thickness.x. This ensures that when a ray passes behind an object, it is not considered an intersection.

Comparison of Depth Values

When considering what kind of depth value to compare during the stepping process, several factors come into play. We define the depth value obtained from stepping as rayDepth and the depth value obtained from sampling as sampleDepth.

By directly sampling the depth texture, we obtain the depth value in normalized device coordinates. Therefore, a straightforward approach is to compare these depths in NDC. When rayDepth < sampleDepth, the ray intersects the scene.

Alternatively, we can compare the actual depth values in view space. This approach allows us to specify a thickness value. If the depth difference exceeds this thickness, we consider the ray to have passed behind an object without intersecting it. Specifically, when rayDepth > sampleDepth && rayDepth < sampleDepth + thickness, the ray intersects the scene.
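
As a sketch, the two tests look like this, given rayDepth and sampleDepth as raw (reversed-Z, as on DX11) depth buffer values and a thickness derived from the thickness parameters. LinearEyeDepth comes from Unity’s shader library:

    // Comparison in NDC (reversed-Z: smaller value = farther away):
    bool hitNDC = rayDepth < sampleDepth;

    // Comparison in view space, with a thickness term:
    float linearRayDepth = LinearEyeDepth(rayDepth, _ZBufferParams);
    float linearSampleDepth = LinearEyeDepth(sampleDepth, _ZBufferParams);
    bool hitViewSpace = linearRayDepth > linearSampleDepth
                        && linearRayDepth < linearSampleDepth + thickness;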

One thing worth noting is the sampler used when sampling the depth texture. Linear interpolation can mistakenly identify intersections at the edges of two faces with different depths, resulting in undesirable artifacts (small dots) on the screen. If available, a separate texture marking object edges can help exclude these intersection points. But in our shader, we’ll stick with a Point Clamp sampler.

Ray Marching

Here’s the workflow breakdown (a code sketch follows the list):

  1. Define k0 and k1 as the reciprocals of the w-components of the clip space coordinates of the start and end points.
  2. Define q0 and q1 as the xyz-components of the clip space coordinates of the start and end points.
  3. Define p0 and p1 as the xy-components of the normalized device coordinates of the start and end points.
  4. Define w as a variable that increases linearly in (0, 1) based on _StepCount.
  5. For each step, update the value of w and use it to linearly interpolate the three sets of components (k, q, and p).
  6. Calculate rayDepth using q.z * k, and sample the depth texture at p to obtain sampleDepth.
  7. If rayDepth < sampleDepth, the ray intersects the scene; exit the loop and return p.
  8. Sample the color texture at p to obtain the reflection color.
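
A minimal sketch of this naive loop, assuming the k/q/p setup shown earlier and the point-clamp depth sampler (DEPTH_SAMPLER) defined in the final shader:

    float2 hitPos = float2(-1.0f, -1.0f);
    bool hit = false;
    for (int i = 1; i <= _StepCount; ++i)
    {
        float w = float(i) / float(_StepCount);
        float3 q = lerp(q0, q1, w);
        float2 p = lerp(p0, p1, w);
        float k = lerp(k0, k1, w);

        float rayDepth = q.z * k;
        float sampleDepth = _CameraDepthTexture.Sample(DEPTH_SAMPLER, p).r;
        if (rayDepth < sampleDepth) // reversed-Z: the ray is now behind the scene
        {
            hitPos = p;
            hit = true;
            break;
        }
    }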

It looks like this (32 steps):
Screen Space Reflection Naive

Pretty poor! The most noticeable issue is the stretching effect. There are primarily two causes: First, we didn’t use thickness to determine whether the ray passes behind an object, resulting in significant stretching beneath floating objects. Second, we didn’t restrict positions outside the screen area, causing us to sample depth values from coordinates beyond the screen and get back depth values clamped at the screen border.

Thickness Test

To address the thickness issue mentioned earlier, we introduce a way to determine whether the stepping position is behind an object. This method relies on the linear depths from the camera: linearRayDepth and linearSampleDepth.

As previously discussed, we use linearSampleDepth * _Thickness.y + _Thickness.x as the thickness of an object in the scene. To determine whether the ray passes behind an object, we compare (linearRayDepth - linearSampleDepth - _Thickness.x) / linearSampleDepth with _Thickness.y. If the former is greater than the latter, the ray passes behind an object.

    float getThicknessDiff(float diff, float linearSampleDepth, float2 thicknessParams)
    {
        // diff = linearRayDepth - linearSampleDepth. The ray counts as passing
        // behind an object when this normalized value exceeds thicknessParams.y.
        return (diff - thicknessParams.x) / linearSampleDepth;
    }

The workflow now becomes:

  1. If rayDepth < sampleDepth and thicknessDiff < _Thickness.y, the ray intersects the scene; exit the loop and return p.

It looks like this (32 steps):
Screen Space Reflection Thickness Test

Frustum Clipping

For an endpoint p1 that falls outside the screen area, two issues arise: First, sampling the depth texture beyond the screen range yields incorrect depth values. Second, it reduces the effective sample count. To address this, we can restrict p1 to within the screen area. Here’s a method for constraining the stepping endpoint within the view frustum. We define nf as the near and far clipping plane depths (positive values), and s as the slopes of the view frustum in the horizontal and vertical directions (positive values). Numerically, s is given by float2(aspect * tan(fovy * 0.5f), tan(fovy * 0.5f)). Note that for ease of calculation, the z-components of from and to are positive.

    #define INFINITY 1e10

    float3 frustumClip(float3 from, float3 to, float2 nf, float2 s)
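
The function body was elided in this copy of the article, so here is a minimal reconstruction based on the description above: clip the segment against each of the six frustum planes and keep the smallest crossing parameter. It assumes from lies inside the frustum, and the clipOnePlane helper is my naming, not necessarily the author’s:

    float clipOnePlane(float fromDist, float toDist)
    {
        // fromDist/toDist: signed distances to a frustum plane (positive = inside).
        // Returns the parameter t where the segment crosses the plane,
        // or 1.0 when the endpoint is already inside.
        return toDist < 0.0f ? fromDist / (fromDist - toDist) : 1.0f;
    }

    float3 frustumClip(float3 from, float3 to, float2 nf, float2 s)
    {
        float t = 1.0f;
        t = min(t, clipOnePlane(from.z - nf.x, to.z - nf.x));               // near plane
        t = min(t, clipOnePlane(nf.y - from.z, nf.y - to.z));               // far plane
        t = min(t, clipOnePlane(s.x * from.z - from.x, s.x * to.z - to.x)); // right plane
        t = min(t, clipOnePlane(s.x * from.z + from.x, s.x * to.z + to.x)); // left plane
        t = min(t, clipOnePlane(s.y * from.z - from.y, s.y * to.z - to.y)); // top plane
        t = min(t, clipOnePlane(s.y * from.z + from.y, s.y * to.z + to.y)); // bottom plane
        return lerp(from, to, saturate(t));
    }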

I’ve actually written a shadertoy to demonstrate frustum clipping in 2D, interactable with the mouse:

Frustum Clip 2D

The workflow now becomes:

  1. Frustum clip the stepping endpoint to clippedPosVS, and transform it to the clip space position endCS.

It looks like this (32 steps):
Screen Space Reflection Frustum Clip


The scene is starting to exhibit some reflection, although there’s still room for improvement. The view frustum clipping has indeed reduced the number of pixels between steps, filling in some of the gaps. However, the reflected colors appear distorted.

In the previous step, although p is guaranteed to be on the intersected object, it can still be far from the actual intersection point. To reduce this error, we can use binary search. Here’s how it works: we maintain two variables during stepping, w1 and w2, representing the w values of the last two steps (w1 > w2). During each binary search iteration, we calculate w = 0.5f * (w1 + w2); if an intersection is detected, we update w1 = w, otherwise we update w2 = w, and proceed to the next iteration.
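
In code, the binary search looks like this (a sketch; the interpolation setup is the same as during stepping):

    // Refine between the last step above the scene (w2) and the first
    // step below it (w1); this article uses 5 iterations.
    float2 hitPos = lerp(p0, p1, w1);
    [unroll(5)]
    for (int i = 0; i < 5; ++i)
    {
        float w = 0.5f * (w1 + w2);
        float3 q = lerp(q0, q1, w);
        float2 p = lerp(p0, p1, w);
        float k = lerp(k0, k1, w);
        float sampleDepth = _CameraDepthTexture.Sample(DEPTH_SAMPLER, p).r;
        if (q.z * k < sampleDepth) { w1 = w; hitPos = p; } // still intersecting: move toward w2
        else { w2 = w; }                                   // no intersection: move toward w1
    }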

The workflow now becomes:

  1. Frustum clip the stepping endpoint to clippedPosVS, and transform it to the clip space position endCS.
  2. Define k0 and k1 as the reciprocals of the w-components of the clip space coordinates of the start and end points.
  3. Define q0 and q1 as the xyz-components of the clip space coordinates of the start and end points.
  4. Define p0 and p1 as the xy-components of the normalized device coordinates of the start and end points.
  5. Define w1 as a variable that increases linearly in (0, 1) based on _StepCount; initialize w1 and w2 with 0.
  6. For each step, set w2 = w1, update the value of w1, and use it to linearly interpolate the three sets of components (k, q, and p).
  7. Calculate rayDepth using q.z * k, and sample the depth texture at p to obtain sampleDepth.
  8. If rayDepth < sampleDepth and thicknessDiff < _Thickness.y, the ray intersects the scene; exit the loop.
  9. Let w be the average of w1 and w2. Repeat steps 5, 6, and 7 to check whether an intersection occurs until the binary search loop ends, updating either w1 or w2 in each iteration depending on the result.
  10. Sample the color texture at p to obtain the reflection color.

It looks like this (32 steps, 5 binary searches):
Screen Space Reflection Binary Search

The reflection now appears less distorted (particularly noticeable in the bottom left corner). However, there are still gaps between color segments because our thickness test excludes potential intersections.

Potential Intersections

To compute potential intersections, let’s revisit the workflow where we do the thickness test. When the ray passes behind an object, and it was above the scene during the last step, we can calculate the difference (thicknessDiff) between rayDepth and sampleDepth. If this value is smaller than the minimum difference so far (minThicknessDiff), we consider it a potential intersection. We update minThicknessDiff and record the current w1 and w2 for the subsequent binary search.

During binary search, if an actual intersection occurs, we follow the original code. If a potential intersection occurs, we also need to track thicknessDiff during the binary search. We find the smallest thicknessDiff less than _Thickness.y, use the current w to interpolate between p0 and p1 to obtain p, and finally use p to sample the color texture.
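
Here is the corresponding bookkeeping inside the marching loop, extracted from the full shader below with comments added:

    if (hitDiff > 0.0f) // the ray is behind the depth buffer surface
    {
        if (thicknessDiff < _ThicknessParams.y)
        {
            hit = true; // an actual intersection
            break;
        }
        else if (!lastHit) // the ray just crossed behind an object: a potential intersection
        {
            potentialHit = true;
            if (minPotentialHitPos > thicknessDiff)
            {
                minPotentialHitPos = thicknessDiff; // keep the closest miss
                potentialW12 = float2(w1, w2);      // remember where to binary search later
            }
        }
    }
    lastHit = hitDiff > 0.0f;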

The workflow now becomes:

  1. Frustum clip the stepping endpoint to clippedPosVS, and transform it to the clip space position endCS.
  2. Define k0 and k1 as the reciprocals of the w-components of the clip space coordinates of the start and end points.
  3. Define q0 and q1 as the xyz-components of the clip space coordinates of the start and end points.
  4. Define p0 and p1 as the xy-components of the normalized device coordinates of the start and end points.
  5. Define w1 as a variable that increases linearly in (0, 1) based on _StepCount; initialize w1 and w2 with 0.
  6. For each step, set w2 = w1, update the value of w1, and use it to linearly interpolate the three sets of components (k, q, and p).
  7. Calculate rayDepth using q.z * k, and sample the depth texture at p to obtain sampleDepth.
  8. If rayDepth < sampleDepth and thicknessDiff < _Thickness.y, the ray intersects the scene; exit the loop.
  9. Otherwise, if rayDepth < sampleDepth, thicknessDiff > _Thickness.y, and the previous ray position was in front of the scene, compare thicknessDiff with the minimum value so far. If it is smaller, update the minimum value, record the current w1 and w2, mark this as a potential intersection, and continue looping.
  10. If an actual intersection occurs, let w be the average of w1 and w2. Repeat steps 5, 6, and 7 to check whether an intersection occurs until the binary search loop ends, updating either w1 or w2 in each iteration depending on the result.
  11. If a potential intersection occurs, repeat steps 5, 6, and 7 to check whether an intersection occurs, and use the smallest thicknessDiff and its w to update p.
  12. Sample the color texture at p to obtain the reflection color.

It looks like this (32 steps, 5 binary searches):
Screen Space Reflection Potential Hit
And here is the result using 64 steps and 5 binary searches:
Screen Space Reflection Potential Hit

ScreenSpaceReflection.shader

/*
// Copyright (c) 2024 zznewclear@gmail.com
// 
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
// 
// The above copyright notice and this permission notice shall be included in all
// copies or substantial portions of the Software.
// 
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
// SOFTWARE.
*/

Shader "zznewclear13/SSRShader"
{
    Properties
    {
        [Toggle(USE_POTENTIAL_HIT)] _UsePotentialHit ("Use Potential Hit", Float) = 1.0
        [Toggle(USE_FRUSTUM_CLIP)] _UseFrustumClip ("Use Frustum Clip", Float) = 1.0
        [Toggle(USE_BINARY_SEARCH)] _UseBinarySearch ("Use Binary Search", Float) = 1.0
        [Toggle(USE_THICKNESS)] _UseThickness ("Use Thickness", Float) = 1.0
        
        _MaxDistance ("Max Distance", Range(0.1, 100.0)) = 15.0
        [int] _StepCount ("Step Count", Float) = 32
        _ThicknessParams ("Thickness Params", Vector) = (0.1, 0.02, 0.0, 0.0)
    }

    HLSLINCLUDE
    #include "Packages/com.unity.render-pipelines.core/ShaderLibrary/Common.hlsl"
    #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"
    #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Lighting.hlsl"
    
    #pragma shader_feature _ USE_POTENTIAL_HIT
    #pragma shader_feature _ USE_FRUSTUM_CLIP
    #pragma shader_feature _ USE_BINARY_SEARCH
    #pragma shader_feature _ USE_THICKNESS

    #define INFINITY 1e10
    #define DEPTH_SAMPLER sampler_PointClamp

    Texture2D _CameraOpaqueTexture;
    Texture2D _CameraDepthTexture;
    CBUFFER_START(UnityPerMaterial)
    float _MaxDistance;
    int _StepCount;
    float2 _ThicknessParams;
    CBUFFER_END

    struct Attributes
    {
        float4 positionOS   : POSITION;
        float3 normalOS     : NORMAL;
        float2 texcoord     : TEXCOORD0;
    };

    struct Varyings
    {
        float4 positionCS   : SV_POSITION;
        float3 positionWS   : TEXCOORD0;
        float3 normalWS     : TEXCOORD1;
        float2 uv           : TEXCOORD2;
        float3 viewWS       : TEXCOORD3;
    };

    Varyings vert(Attributes input)
    {
        Varyings output = (Varyings)0;
        VertexPositionInputs vpi = GetVertexPositionInputs(input.positionOS.xyz);
        VertexNormalInputs vni = GetVertexNormalInputs(input.normalOS);

        output.positionCS = vpi.positionCS;
        output.positionWS = vpi.positionWS;
        output.normalWS = vni.normalWS;
        output.uv = input.texcoord;
        output.viewWS = GetCameraPositionWS() - vpi.positionWS;
        return output;
    }

    // NOTE: the body of frustumClip was elided in this copy of the article;
    // the following is a reconstruction from the description in the text,
    // not the author's original code. It assumes 'from' lies inside the frustum.
    float clipOnePlane(float fromDist, float toDist)
    {
        // fromDist/toDist: signed distances to a frustum plane (positive = inside).
        return toDist < 0.0f ? fromDist / (fromDist - toDist) : 1.0f;
    }

    float3 frustumClip(float3 from, float3 to, float2 nf, float2 s)
    {
        float t = 1.0f;
        t = min(t, clipOnePlane(from.z - nf.x, to.z - nf.x));               // near plane
        t = min(t, clipOnePlane(nf.y - from.z, nf.y - to.z));               // far plane
        t = min(t, clipOnePlane(s.x * from.z - from.x, s.x * to.z - to.x)); // right plane
        t = min(t, clipOnePlane(s.x * from.z + from.x, s.x * to.z + to.x)); // left plane
        t = min(t, clipOnePlane(s.y * from.z - from.y, s.y * to.z - to.y)); // top plane
        t = min(t, clipOnePlane(s.y * from.z + from.y, s.y * to.z + to.y)); // bottom plane
        return lerp(from, to, saturate(t));
    }

    float getThicknessDiff(float diff, float linearSampleDepth, float2 thicknessParams)
    {
        // diff = linearRayDepth - linearSampleDepth. The ray counts as passing
        // behind an object when this normalized value exceeds thicknessParams.y.
        return (diff - thicknessParams.x) / linearSampleDepth;
    }

    float4 frag(Varyings input) : SV_TARGET
    {
        float3 positionWS = input.positionWS;
        float3 normalWS = normalize(input.normalWS);
        float3 viewWS = normalize(input.viewWS);
        float3 reflWS = reflect(-viewWS, normalWS);
        float3 env = GlossyEnvironmentReflection(reflWS, 0.0f, 1.0f);
        float3 color = env;

        float3 originWS = positionWS;
        float3 endWS = positionWS + reflWS * _MaxDistance;

#if defined(USE_FRUSTUM_CLIP)
        float3 originVS = mul(UNITY_MATRIX_V, float4(originWS, 1.0f)).xyz;
        float3 endVS = mul(UNITY_MATRIX_V, float4(endWS, 1.0f)).xyz;
        float3 flipZ = float3(1.0f, 1.0f, -1.0f);
        float3 clippedVS = frustumClip(originVS * flipZ, endVS * flipZ, _ProjectionParams.yz, float2(1.0f, -1.0f) / UNITY_MATRIX_P._m00_m11);
        clippedVS *= flipZ;
        float4 originCS = mul(UNITY_MATRIX_VP, float4(originWS, 1.0f));
        float4 endCS = mul(UNITY_MATRIX_P, float4(clippedVS, 1.0f));
#else
        float4 originCS = mul(UNITY_MATRIX_VP, float4(originWS, 1.0f));
        float4 endCS = mul(UNITY_MATRIX_VP, float4(endWS, 1.0f));
#endif

        float k0 = 1.0f / originCS.w;
        float k1 = 1.0f / endCS.w;
        float3 q0 = originCS.xyz;
        float3 q1 = endCS.xyz;
        float2 p0 = originCS.xy * float2(1.0f, -1.0f) * k0 * 0.5f + 0.5f;
        float2 p1 = endCS.xy * float2(1.0f, -1.0f) * k1 * 0.5f + 0.5f;

#if defined(USE_POTENTIAL_HIT)
        float w1 = 0.0f;
        float w2 = 0.0f;
        bool hit = false;
        bool lastHit = false;
        bool potentialHit = false;
        float2 potentialW12 = float2(0.0f, 0.0f);
        float minPotentialHitPos = INFINITY;
        [unroll(64)]
        for (int i=0; i<_StepCount; ++i)
        {
            w2 = w1;
            w1 += 1.0f / float(_StepCount);

            float3 q = lerp(q0, q1, w1);
            float2 p = lerp(p0, p1, w1);
            float k = lerp(k0, k1, w1);
            float sampleDepth = _CameraDepthTexture.Sample(DEPTH_SAMPLER, p).r;
            float linearSampleDepth = LinearEyeDepth(sampleDepth, _ZBufferParams);
            float linearRayDepth = LinearEyeDepth(q.z * k, _ZBufferParams);

            float hitDiff = linearRayDepth - linearSampleDepth;
            float thicknessDiff = getThicknessDiff(hitDiff, linearSampleDepth, _ThicknessParams);
            if (hitDiff > 0.0f)
            {
                if (thicknessDiff < _ThicknessParams.y)
                {
                    hit = true;
                    break;
                }
                else if(!lastHit)
                {
                    potentialHit = true;
                    if (minPotentialHitPos > thicknessDiff)
                    {
                        minPotentialHitPos = thicknessDiff;
                        potentialW12 = float2(w1, w2);
                    }
                }
            }
            lastHit = hitDiff > 0.0f;
        }
#else
        float w1 = 0.0f;
        float w2 = 0.0f;
        bool hit = false;
        [unroll(64)]
        for (int i=0; i<_StepCount; ++i)
        {
            w2 = w1;
            w1 += 1.0f / float(_StepCount);

            float3 q = lerp(q0, q1, w1);
            float2 p = lerp(p0, p1, w1);
            float k = lerp(k0, k1, w1);
            float sampleDepth = _CameraDepthTexture.Sample(DEPTH_SAMPLER, p).r;
#if defined(USE_THICKNESS)
            float linearSampleDepth = LinearEyeDepth(sampleDepth, _ZBufferParams);
            float linearRayDepth = LinearEyeDepth(q.z * k, _ZBufferParams);

            float hitDiff = linearRayDepth - linearSampleDepth;
            float thicknessDiff = getThicknessDiff(hitDiff, linearSampleDepth, _ThicknessParams);
            if (hitDiff > 0.0f && thicknessDiff < _ThicknessParams.y)
            {
                hit = true;
                break;
            }       
#else
            if (q.z * k < sampleDepth)
            {
                hit = true;
                break;
            }
#endif
        }
#endif

#if defined(USE_POTENTIAL_HIT)
        if (hit || potentialHit)
        {
            if (!hit)
            {
                w1 = potentialW12.x;
                w2 = potentialW12.y;
            }

            bool realHit = false;
            float2 hitPos = lerp(p0, p1, w1); // initialize with the last stepped position
            float minThicknessDiff = _ThicknessParams.y;
            [unroll(5)]
            for (int i=0; i<5; ++i)
            {
                float w = 0.5f * (w1 + w2);
                float3 q = lerp(q0, q1, w); // note: interpolate with w, not w1
                float2 p = lerp(p0, p1, w);
                float k = lerp(k0, k1, w);
                float sampleDepth = _CameraDepthTexture.Sample(DEPTH_SAMPLER, p).r;
                float linearSampleDepth = LinearEyeDepth(sampleDepth, _ZBufferParams);
                float linearRayDepth = LinearEyeDepth(q.z * k, _ZBufferParams);
                float hitDiff = linearRayDepth - linearSampleDepth;

                if (hitDiff > 0.0f)
                {
                    w1 = w;
                    if (hit) hitPos = p;
                }
                else
                {
                    w2 = w;
                }

                float thicknessDiff = getThicknessDiff(hitDiff, linearSampleDepth, _ThicknessParams);
                float absThicknessDiff = abs(thicknessDiff);
                if (!hit && absThicknessDiff < minThicknessDiff)
                {
                    realHit = true;
                    minThicknessDiff = absThicknessDiff; // track the smallest |thicknessDiff|
                    hitPos = p;
                }
            }

            if (hit || realHit) color = _CameraOpaqueTexture.Sample(sampler_LinearClamp, hitPos).rgb * 0.3f;
        }
#elif defined(USE_BINARY_SEARCH)
        if (hit)
        {
            float2 hitPos = lerp(p0, p1, w1); // initialize with the last stepped position
            [unroll(5)]
            for (int i=0; i<5; ++i)
            {
                float w = 0.5f * (w1 + w2);
                float3 q = lerp(q0, q1, w); // note: interpolate with w, not w1
                float2 p = lerp(p0, p1, w);
                float k = lerp(k0, k1, w);

                float sampleDepth = _CameraDepthTexture.Sample(DEPTH_SAMPLER, p).r;
                if (q.z * k < sampleDepth)
                {
                    w1 = w;
                    hitPos = p;
                }
                else
                {
                    w2 = w;
                }
            }
            color = _CameraOpaqueTexture.Sample(sampler_LinearClamp, hitPos).rgb * 0.3f;
        }
#else
        if (hit)
        {
            float2 hitPos = lerp(p0, p1, w1);
            color = _CameraOpaqueTexture.Sample(sampler_LinearClamp, hitPos).rgb * 0.3f;
        }
#endif

        return float4(color, 1.0f);
    }

    ENDHLSL

    SubShader
    {
        Tags { "RenderType"="Clear" "Queue"="Clear" }
        LOD 100

        Pass
        {
            HLSLPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            ENDHLSL
        }
    }
}

Future Optimization

Currently, there’s one aspect worth optimizing: controlling the overall step count based on the pixel distance between p0 and p1. We certainly don’t want to step 64 times for just 10 pixels. However, this is a relatively simple task, and I’ll leave it to someone with time to spare. As for random sampling, blurring, and Fresnel effects, let’s consider those when we actually need them.
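
For anyone who wants to try it, a possible starting point might look like this (my own sketch, not from the original article; _ScreenParams is Unity’s built-in resolution vector, and the pixels-per-step factor is arbitrary):

    // Derive the step count from the marched distance in pixels, so short rays
    // take few steps. p0 and p1 are the screen space endpoints in [0, 1] UV space.
    int getAdaptiveStepCount(float2 p0, float2 p1, int maxStepCount)
    {
        float2 deltaPixels = abs(p1 - p0) * _ScreenParams.xy;
        float pixelDistance = max(deltaPixels.x, deltaPixels.y);
        return clamp(int(ceil(pixelDistance * 0.25f)), 1, maxStepCount); // ~1 step per 4 pixels
    }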

Postscript

This article was translated by Microsoft’s Copilot, and I made a few adjustments. What an era we live in!
