Screen Space Reflections are used in every engine these days. Games from The Witcher 3 to Fortnite employ this effect, so it was natural that I wanted it in Vertices. While I came across a lot of different examples and tutorials online, none seemed complete or clear enough to easily show the method, so I’ve decided to share my implementation here in the hope of helping others with this technique.

# Initial Approach

The initial approach to reflections in Vertices was to use planar reflections, which essentially meant redrawing the entire scene from the reflected position of the camera.

While this provided a great result with very few artifacts, it was expensive, as it required re-drawing the scene for each reflected surface. Enter SSR.

# The Concept

Before we can create something that imitates the real world, we need to break down how it behaves in the real world, which for reflections is quite intuitive.

A reflection is simply light bouncing off of a surface into a camera or eye. The amount and direction of the reflected light are determined by the roughness of the surface itself.

# The Algorithm

For Screen Space Reflections, we can reverse this process by performing a ray march for each pixel that we decide is reflective (using values from the specular map and reflection map along with a calculated Fresnel value). From the camera’s position and viewing direction, we can trace where light would have to come from to reach the camera off a reflected surface. We then ray march along that direction using the depth buffer until we intersect with a surface.

The SSR method breaks down as follows.

## Get Initial Location

For each UV screen coordinate, we need to find the 3D World Position (reconstructed from the Depth Map and the Camera’s InverseViewMatrix). We also need to get the Normal of whatever surface is at this position from the Normal Map.

```hlsl
// Get initial Depth
float InitDepth = GetDepth(texCoord);
// Now get the position
float3 reflPosition = GetWorldPosition(texCoord, InitDepth);
// Get the Normal Data
float3 normalData = tex2D(NormalSampler, texCoord).xyz;
// Transform normal back into [-1,1] range
float3 reflNormal = 2.0f * normalData - 1.0f;
```
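The `GetWorldPosition` helper isn’t listed in this post. A minimal sketch of one common approach, assuming the engine exposes an `InverseViewProjection` matrix and a depth buffer storing post-projection depth (treat the names here as assumptions, not the exact Vertices code):

```hlsl
// Hypothetical helper: reconstruct a world position from screen UV and depth.
// Assumes InverseViewProjection = inverse(View * Projection) is set by the engine.
float3 GetWorldPosition(float2 uv, float depth)
{
    // Convert UV [0,1] to clip space [-1,1], flipping Y (UV origin is top-left)
    float4 clipPos = float4(uv.x * 2.0f - 1.0f, (1.0f - uv.y) * 2.0f - 1.0f, depth, 1.0f);
    float4 worldPos = mul(clipPos, InverseViewProjection);
    // Undo the perspective divide
    return worldPos.xyz / worldPos.w;
}
```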

## View Direction

Since we have the Normal and 3D World Position of whatever surface is at this pixel, we now need to find the View Direction and Reflected Vector.

The View Direction is simply the pixel’s 3D world space position minus the camera’s world space position, and can be thought of as the vector that the reflected light travels along to reach the camera.

The Reflected Vector can be thought of as the direction the light was travelling before it hit the surface and was reflected into the camera. Since we’re reversing the process, this is simply the reflection of the View Direction about the surface normal.

```hlsl
// First, get the View Direction
float3 vDir = normalize(reflPosition - CameraPos);
// Reflect it about the surface normal
float3 reflectDir = normalize(reflect(vDir, normalize(reflNormal)));
```
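For clarity, HLSL’s `reflect(i, n)` intrinsic computes `i - 2 * dot(i, n) * n`, so the line above is equivalent to:

```hlsl
// Hand-rolled equivalent of reflect(vDir, n) for a normalized normal n
float3 n = normalize(reflNormal);
float3 reflectDir = normalize(vDir - 2.0f * dot(vDir, n) * n);
```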

## The Loop and Trace

Now that we have the physical data such as reflection vectors, world positions and surface normals, we can then do a ray trace.

Our shader uses a TraceRay function which returns a RayTraceOutput struct holding whether the ray hit anything and, if so, the UV coordinates of that hit.

```hlsl
struct RayTraceOutput
{
    bool Hit;
    float2 UV;
};
```

Our TraceRay function will then perform a ray march, getting the depth at each position in the loop and checking whether it’s close enough to the depth in the depth buffer.

The meat and potatoes of the TraceRay function is the following:

```hlsl
// The current position in 3D
float3 curPos = 0;
// The current UV (the z component holds the depth of curPos)
float3 curUV = 0;
// The current length along the ray
float curLength = 1;

// Now loop
for (int i = 0; i < loops; i++)
{
    // Has it hit anything yet?
    if (output.Hit == false)
    {
        // Update the current position of the ray
        curPos = reflPosition + reflectDir * curLength;
        // Get the UV coordinates of the current ray position
        curUV = GetUVFromPosition(curPos);
        // The depth of the current pixel
        float curDepth = GetDepth(curUV.xy);

        for (int j = 0; j < SAMPLE_COUNT; j++)
        {
            if (abs(curUV.z - curDepth) < DepthCheckBias)
            {
                // If it's hit something, then return the UV position
                output.Hit = true;
                output.UV = curUV.xy;
                break;
            }
            // If it hasn't hit anything, check the surrounding pixels
            curDepth = GetDepth(curUV.xy + (RAND_SAMPLES[j].xy * HalfPixel * 2));
        }

        // Get the new position and ray length for the next step
        float3 newPos = GetWorldPosition(curUV.xy, curDepth);
        curLength = length(reflPosition - newPos);
    }
}
return output;
```

So let’s break this function down.

## March In Step

The first part we’ll look at is the outer loop. This is what steps along the ray, checking at each step whether it has hit anything yet.

```hlsl
// Update the current position of the ray
curPos = reflPosition + reflectDir * curLength;
// Get the UV coordinates of the current ray position
curUV = GetUVFromPosition(curPos);
// The depth of the current pixel
float curDepth = GetDepth(curUV.xy);
```

What we have here finds the world position at this step along the ray march. We then get the screen space position of this world position. Note that the ‘z’ value returned holds the depth of this world space position, which is likely different from the curDepth value sampled from the depth buffer at this screen space position.

It’s this difference in depth which is important to us.
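The `GetUVFromPosition` helper isn’t listed in this post either. A plausible sketch, assuming a `ViewProjection` matrix is available (names and conventions are assumptions):

```hlsl
// Hypothetical helper: project a world position back into screen space.
// Assumes ViewProjection = View * Projection is set by the engine.
float3 GetUVFromPosition(float3 worldPos)
{
    float4 clipPos = mul(float4(worldPos, 1.0f), ViewProjection);
    clipPos.xyz /= clipPos.w; // Perspective divide to NDC
    // Convert NDC [-1,1] to UV [0,1] (flip Y) and carry the depth in z
    return float3(clipPos.x * 0.5f + 0.5f, 0.5f - clipPos.y * 0.5f, clipPos.z);
}
```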

## Check all the things

The next part then enters a loop checking whether the difference between the world position depth and the depth buffer value is below a certain DepthCheckBias value. If not, it samples the surrounding pixels to see whether any of them produce a hit.

```hlsl
for (int j = 0; j < SAMPLE_COUNT; j++)
{
    if (abs(curUV.z - curDepth) < DepthCheckBias)
    {
        // If it's hit something, then return the UV position
        output.Hit = true;
        output.UV = curUV.xy;
        break;
    }
    // If it hasn't hit anything, check the surrounding pixels
    curDepth = GetDepth(curUV.xy + (RAND_SAMPLES[j].xy * HalfPixel * 2));
}
```

So if we’ve hit something, we can return the output struct; if not, we use a RAND_SAMPLES array to check surrounding pixels for collisions, which helps cut down on artifacts and missed positive hits.
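The RAND_SAMPLES array itself isn’t defined in this post. One possible setup is a small fixed offset kernel, for example:

```hlsl
// Hypothetical sample kernel: a few fixed offsets in the [-1,1] range,
// scaled by HalfPixel * 2 at the sample site. In practice these might be
// Poisson-disk offsets, optionally rotated per pixel to reduce banding.
#define SAMPLE_COUNT 4
static const float3 RAND_SAMPLES[SAMPLE_COUNT] =
{
    float3( 0.5f,  0.5f, 0),
    float3(-0.5f,  0.5f, 0),
    float3( 0.5f, -0.5f, 0),
    float3(-0.5f, -0.5f, 0)
};
```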

## Creating the UV Map

Once we’ve done the ray tracing, we can output the result as the UVs and the reflection amount to the SSR UV Map. We use a UV map because it allows us to perform more precise blurring for surface roughness later on in a separate pass.

```hlsl
float4 PixelShaderFunction(float2 texCoord : TEXCOORD0) : COLOR0
{
    // Get the amount of reflection. Only calculate reflection on
    // surfaces with reflection data.
    float amount = tex2D(NormalSampler, texCoord - HalfPixel * 2).a;
    if (amount > 0)
    {
        RayTraceOutput ray = TraceRay(texCoord - HalfPixel * 2);
        if (ray.Hit == true)
        {
            // Fade at edges
            if (ray.UV.y < EdgeCutOff * 2)
                amount *= (ray.UV.y / EdgeCutOff / 2);

            return float4(ray.UV.xy, 0, amount);
        }
    }
    // If it didn't hit anything, then just simply return nothing
    return 0;
}
```

The result here is sent to a UV map which is used by a later Post Processor. Note that we fade at the edges to deal with artifacts at these edges.

## Painting the Picture

Now that we have the UV coordinates of the Reflection positions, we can take those and apply the reflection to the scene.
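The apply pass itself isn’t listed in this post. A minimal sketch of what it might look like, with `SceneSampler` and `SSRUVSampler` as assumed names for the scene colour and SSR UV map render targets:

```hlsl
// Hypothetical apply pass: read the SSR UV map, fetch the scene colour
// at the reflected coordinates, and blend it in by the stored amount.
float4 ApplySSR(float2 texCoord : TEXCOORD0) : COLOR0
{
    float4 sceneColor = tex2D(SceneSampler, texCoord);
    float4 ssr = tex2D(SSRUVSampler, texCoord); // xy = reflected UV, a = amount
    if (ssr.a > 0)
    {
        float4 reflColor = tex2D(SceneSampler, ssr.xy);
        sceneColor.rgb = lerp(sceneColor.rgb, reflColor.rgb, ssr.a);
    }
    return sceneColor;
}
```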

# Next Steps

## Surface Roughness

We can use the Specular Power as a reference for the material’s ‘roughness’, using that to apply a surface blur. The blur amount would be factored by the distance between the current screen coordinates and the reflected screen coordinates, i.e. how far away the reflected point is on the screen. We can then perform a circular blur with the size of that distance.
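As a sketch of how that radius might be derived in the blur pass (sampler and parameter names here are assumptions, not the Vertices code):

```hlsl
// Hypothetical blur-radius calculation: the radius grows with the
// screen-space distance to the reflected point and with the material's
// roughness, here approximated from the specular map's alpha channel.
float2 reflUV = tex2D(SSRUVSampler, texCoord).xy;
float dist = length(reflUV - texCoord);
float roughness = 1.0f - tex2D(SpecularSampler, texCoord).a;
float blurRadius = dist * roughness * MaxBlurRadius;
```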

## Filling Missing Data

SSR works well with objects which are in contact with the reflective surface, but it can only reflect what is visible on the screen. A long-standing artifact of SSR is that surfaces which are not visible will not be reflected, leaving a gap in the reflection. A good workaround is to fall back to a cube map, which will at least add some level of reflection, though the artifact remains visible.

## Binary Test

The method I’ve shown is a quick and dirty way to get a good reflection in a small area, but it starts to break down when there are objects not in contact with the surface. For a more versatile implementation over larger areas, we can add a binary search during the depth test to get a more accurate intersection. It does add to the expense of this effect, though, and I’ll be posting an article covering this later on.
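As a sketch of what that refinement might look like (prevPos, BINARY_STEPS and the depth convention are assumptions, not code from this post):

```hlsl
// Hypothetical binary refinement: once the linear march detects a crossing,
// bisect between the last miss (prevPos) and the hit (curPos) to tighten
// the intersection point before writing out the UV.
float3 lo = prevPos; // last position still in front of the surface
float3 hi = curPos;  // first position behind the surface
for (int k = 0; k < BINARY_STEPS; k++)
{
    float3 mid = (lo + hi) * 0.5f;
    float3 midUV = GetUVFromPosition(mid);
    if (midUV.z > GetDepth(midUV.xy))
        hi = mid; // still behind the surface: pull the far end in
    else
        lo = mid; // in front of the surface: push the near end out
}
output.UV = GetUVFromPosition(hi).xy;
```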

# Final Thoughts

SSR is a great effect which adds a lot to a scene, but it can also be temperamental, prone to artifacts, and is not a be-all and end-all solution. That said, when used in the right environment and geometric setup, it can be a versatile addition to your engine.

As always, thanks for reading and if you like what you see, leave a comment, give our blog a follow and give Virtex a follow on Twitter and Instagram.
