
[Unity URP] A Quick Introduction to Screen Space Reflection (SSR)

2021/05/30

Preface

On why my first blog post is about this, and what it will cover:

In my first year working as an art-oriented technical artist (TA), the project used URP, so there was plenty to learn. RenderFeature struck me as one of the more interesting pieces, since it lets you easily inject whatever rendering you want, so I used it for a post-processing exercise and took the chance to understand how SSR works.

This post builds a simple version of SSR on URP. It won't cover optimization or advanced topics; the focus is on making the principle of SSR clear, and along the way it covers how to write a URP RendererFeature. If you're new to URP, I hope it helps.

About Screen Space Reflection (SSR)

To render reflections on smooth surfaces in real time (not baked), the common techniques are Planar Reflection, Screen Space Reflection (SSR), and Screen Space Planar Reflection (SSPR).

This year's Cyberpunk 2077 from CDPR also exposes this option, as shown below:

[Figure: the Screen Space Reflections option in Cyberpunk 2077's graphics settings]

On wet surfaces, SSR looks quite convincing. So what's the price? Ray marching, a screen-space depth texture, a screen-space normal texture, plus a pile of extra problems to handle. It's a performance monster.

About RendererFeature

After Unity introduced URP (formerly LWRP), RenderFeature arrived as a very convenient facility: it lets you inject a CommandBuffer at a specific point in the render loop from code and customize your own rendering. Remarkably user-friendly. You can create one via Create/Rendering/Universal Render Pipeline/Renderer Feature.

[Figure: creating a Renderer Feature from the Create menu]

The key part is the Execute function, which decides how things are rendered. The way I think of it, ScriptableRendererFeature is the inspector-facing class for a ScriptableRenderPass: it handles passing in the inspector parameters, while the ScriptableRenderPass holds all of the actual rendering logic. The structure is very formulaic, and I'll go through the details while implementing SSR.

How SSR works

Screen-space reflection sounds intimidating, but the idea is fairly clear (the main prerequisite is understanding space transforms). First, as is well known, once we have the screen depth texture we can reconstruct world position from depth, which also gives us the view direction. If we can also read screen-space normals, the reflection of the view direction falls out trivially. We then ray march along that reflected direction until the ray's depth exceeds the scene depth at the corresponding screen position (that is, until the ray has gone inside some object); that position is the screen coordinate we want to sample.
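Before diving into URP specifics, the whole idea fits in a dozen lines of HLSL-flavored pseudocode. Everything capitalized below (MAX_STEPS, STEP_SIZE) and all helper functions (ReconstructViewPos, ViewPosToScreenUV, SampleSceneDepthVS, SampleSceneColor) are placeholders standing in for machinery we build later in this post, not real URP API:

// Sketch of the SSR core loop; helper names are placeholders, not URP API.
half4 TraceReflection(float2 uv, float3 normalVS)
{
    float3 posVS   = ReconstructViewPos(uv);        // view-space position rebuilt from depth
    float3 viewDir = normalize(posVS);              // the camera sits at the view-space origin
    float3 reflDir = reflect(viewDir, normalVS);    // reflect the view ray about the normal

    for (int k = 1; k <= MAX_STEPS; k++)
    {
        float3 p = posVS + reflDir * STEP_SIZE * k; // march along the reflected ray
        float2 hitUV = ViewPosToScreenUV(p);        // project the sample back to screen space
        if (-p.z > SampleSceneDepthVS(hitUV))       // ray sank below the depth buffer: a hit
            return SampleSceneColor(hitUV);         // reuse the already-rendered pixel
    }
    return 0;                                       // no hit found on screen
}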

My diagram is rough, apologies, friends:

[Figure: diagram of marching along the reflected view direction until the ray dips below the depth buffer]

SSR: full speed ahead

Step 0: Disclaimer!

Everything below is implemented in forward rendering, even though deferred rendering, where depth and normals come for free from the G-buffer, is SSR's natural home. The only reason it's forward here is that I've never written a deferred pipeline ^ ^. If you ask, that's the whole answer.

Step 1: Getting the DepthNormalsTexture

Perhaps for performance reasons, URP dropped the built-in DepthNormalsTexture design. But there's always a way: alexanderameye's blog shows how to regenerate this texture. See Outline Shader (alexanderameye.github.io).

Using the legacy Hidden/Internal-DepthNormalsTexture shader, we render the RT into the pipeline through a RendererFeature. The result:

[Figures: the encoded depth-normals texture rendered into the pipeline]

The code:

using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

public class DepthNormalsFeature : ScriptableRendererFeature
{
    class DepthNormalsPass : ScriptableRenderPass
    {
        int kDepthBufferBits = 32;
        private RenderTargetHandle depthAttachmentHandle { get; set; }
        internal RenderTextureDescriptor descriptor { get; private set; }

        private Material depthNormalsMaterial = null;
        private FilteringSettings m_FilteringSettings;
        string m_ProfilerTag = "DepthNormals Prepass";
        ShaderTagId m_ShaderTagId = new ShaderTagId("DepthOnly");

        public DepthNormalsPass(RenderQueueRange renderQueueRange, LayerMask layerMask, Material material)
        {
            m_FilteringSettings = new FilteringSettings(renderQueueRange, layerMask);
            depthNormalsMaterial = material;
        }

        public void Setup(RenderTextureDescriptor baseDescriptor, RenderTargetHandle depthAttachmentHandle)
        {
            this.depthAttachmentHandle = depthAttachmentHandle;
            baseDescriptor.colorFormat = RenderTextureFormat.ARGB32;
            baseDescriptor.depthBufferBits = kDepthBufferBits;
            descriptor = baseDescriptor;
        }

        // This method is called before executing the render pass.
        // It can be used to configure render targets and their clear state, and to create temporary render target textures.
        // When empty, this render pass will render to the active camera render target.
        // You should never call CommandBuffer.SetRenderTarget. Instead call <c>ConfigureTarget</c> and <c>ConfigureClear</c>.
        // The render pipeline will ensure target setup and clearing happens in a performant manner.
        public override void Configure(CommandBuffer cmd, RenderTextureDescriptor cameraTextureDescriptor)
        {
            cmd.GetTemporaryRT(depthAttachmentHandle.id, descriptor, FilterMode.Point);
            ConfigureTarget(depthAttachmentHandle.Identifier());
            ConfigureClear(ClearFlag.All, Color.black);
        }

        // Here you can implement the rendering logic.
        // Use <c>ScriptableRenderContext</c> to issue drawing commands or execute command buffers
        // https://docs.unity3d.com/ScriptReference/Rendering.ScriptableRenderContext.html
        // You don't have to call ScriptableRenderContext.submit; the render pipeline will call it at specific points in the pipeline.
        public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
        {
            CommandBuffer cmd = CommandBufferPool.Get(m_ProfilerTag);

            using (new ProfilingScope(cmd, new ProfilingSampler(m_ProfilerTag)))
            {
                context.ExecuteCommandBuffer(cmd);
                cmd.Clear();

                var sortFlags = renderingData.cameraData.defaultOpaqueSortFlags;
                var drawSettings = CreateDrawingSettings(m_ShaderTagId, ref renderingData, sortFlags);
                drawSettings.perObjectData = PerObjectData.None;

                ref CameraData cameraData = ref renderingData.cameraData;
                Camera camera = cameraData.camera;
                if (cameraData.isStereoEnabled)
                    context.StartMultiEye(camera);

                drawSettings.overrideMaterial = depthNormalsMaterial;

                context.DrawRenderers(renderingData.cullResults, ref drawSettings, ref m_FilteringSettings);

                cmd.SetGlobalTexture("_CameraDepthNormalsTexture", depthAttachmentHandle.id);
            }

            context.ExecuteCommandBuffer(cmd);
            CommandBufferPool.Release(cmd);
        }

        /// Cleanup any allocated resources that were created during the execution of this render pass.
        public override void FrameCleanup(CommandBuffer cmd)
        {
            if (depthAttachmentHandle != RenderTargetHandle.CameraTarget)
            {
                cmd.ReleaseTemporaryRT(depthAttachmentHandle.id);
                depthAttachmentHandle = RenderTargetHandle.CameraTarget;
            }
        }
    }

    DepthNormalsPass depthNormalsPass;
    RenderTargetHandle depthNormalsTexture;
    Material depthNormalsMaterial;

    public override void Create()
    {
        depthNormalsMaterial = CoreUtils.CreateEngineMaterial("Hidden/Internal-DepthNormalsTexture");
        depthNormalsPass = new DepthNormalsPass(RenderQueueRange.opaque, -1, depthNormalsMaterial);
        depthNormalsPass.renderPassEvent = RenderPassEvent.AfterRenderingPrePasses;
        depthNormalsTexture.Init("_CameraDepthNormalsTexture");
    }

    // Here you can inject one or multiple render passes in the renderer.
    // This method is called when setting up the renderer, once per camera.
    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
        depthNormalsPass.Setup(renderingData.cameraData.cameraTargetDescriptor, depthNormalsTexture);
        renderer.EnqueuePass(depthNormalsPass);
    }
}

This also makes a very good RendererFeature study case: we initialize everything we need through the ScriptableRenderPass constructor and our own Setup function, override the Configure method to specify the RT we want to render into, and hook into the render loop via Execute.

Step 2: SSR, let's go, let's go

The post-processing Volume:

First, naturally, we need Volume support. Create a new SSR post-processing class that inherits VolumeComponent and IPostProcessComponent to act as the Volume's inspector panel; it can then live in a Volume as part of post-processing.

[Figure: the ScreenSpaceReflection component in a Volume's inspector]

It looks like this:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using System;

namespace UnityEngine.Rendering.Universal
{
    [Serializable, VolumeComponentMenu("Reflection/ScreenSpaceReflection(forward)")]
    public class ScreenSpaceReflection : VolumeComponent, IPostProcessComponent
    {
        public BoolParameter isActive = new BoolParameter(false, false);
        public FloatParameter MaxStep = new FloatParameter(10, false);
        public FloatParameter StepSize = new FloatParameter(1, false);
        public FloatParameter MaxDistance = new FloatParameter(10, false);
        public FloatParameter Thickness = new FloatParameter(1, false);

        public bool IsActive()
        {
            return isActive.value;
        }

        public bool IsTileCompatible()
        {
            return false;
        }
    }
}

SSR RendererFeature:

A Volume panel alone won't do anything; without the actual render work it's an empty shell. So we create a new RendererFeature (see About RendererFeature above for how).

The SSR result is rendered into an RT named "_SSRTexture" rather than into the current screen's framebuffer, which is why we set the target in the Configure function via ConfigureTarget(). The Execute function sets the properties on the SSR shader, performs the post-process with cmd.Blit (much like the built-in pipeline approach), and finally publishes the RT with cmd.SetGlobalTexture().

A RendererFeature tip: if you want the feature to render a specific pass of scene objects' shaders, use context.DrawRenderers(). For a full-screen post effect that route is more hassle (nothing needs culling), since you'd have to prepare FilteringSettings and friends; the upside is that you control exactly what gets drawn. This post needs none of that, so a Blit does it in one step. (I originally wrote it with DrawRenderers, which is why a ShaderTagId is still sitting in the code; I forgot about it and was too lazy to delete it.)

The code:

using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;
using System;

public class SSRRenderPassFeature : ScriptableRendererFeature
{
    public Material ssrMtl;

    class CustomRenderPass : ScriptableRenderPass
    {
        Material ssrMtl;
        ScreenSpaceReflection ssr;
        RenderTextureDescriptor descriptor;
        RenderTargetHandle ssr_handle;
        RenderTargetIdentifier source;

        const string ssr_tag = "Screen Space Reflection Pass";
        ShaderTagId shader_tag = new ShaderTagId("UniversalForward"); // leftover from a DrawRenderers draft, unused

        public void Setup(RenderTargetIdentifier source)
        {
            this.source = source;
        }

        public CustomRenderPass(Material ssrMtl, RenderTargetHandle ssr_handle)
        {
            this.ssr_handle = ssr_handle;
            this.ssrMtl = ssrMtl;
            var stack = VolumeManager.instance.stack;
            ssr = stack.GetComponent<ScreenSpaceReflection>();
        }

        public override void Configure(CommandBuffer cmd, RenderTextureDescriptor cameraTextureDescriptor)
        {
            descriptor = cameraTextureDescriptor;
            cmd.GetTemporaryRT(ssr_handle.id, descriptor, FilterMode.Bilinear);
            ConfigureTarget(ssr_handle.Identifier());
            ConfigureClear(ClearFlag.All, Color.black);
        }

        public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
        {
            if (ssr != null && ssr.isActive.value)
            {
                CommandBuffer cmd = CommandBufferPool.Get(ssr_tag);
                using (new ProfilingScope(cmd, new ProfilingSampler(ssr_tag)))
                {
                    cmd.Blit(source, ssr_handle.Identifier(), ssrMtl);
                    cmd.SetGlobalTexture("_SSRTexture", ssr_handle.Identifier());
                    ssrMtl.SetFloat("_MaxStep", ssr.MaxStep.value);
                    ssrMtl.SetFloat("_StepSize", ssr.StepSize.value);
                    ssrMtl.SetFloat("_MaxDistance", ssr.MaxDistance.value);
                    ssrMtl.SetFloat("_Thickness", ssr.Thickness.value);
                }
                context.ExecuteCommandBuffer(cmd);
                CommandBufferPool.Release(cmd);
            }
        }

        public override void FrameCleanup(CommandBuffer cmd)
        {
        }
    }

    CustomRenderPass m_ScriptablePass;
    RenderTargetHandle ssr_handle;

    public override void Create()
    {
        // Init the handle before the pass copies it; RenderTargetHandle is a struct, copied by value.
        ssr_handle.Init("_SSRTexture");
        m_ScriptablePass = new CustomRenderPass(ssrMtl, ssr_handle);
        m_ScriptablePass.renderPassEvent = RenderPassEvent.AfterRenderingTransparents;
    }

    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
        m_ScriptablePass.Setup(renderer.cameraColorTarget);
        renderer.EnqueuePass(m_ScriptablePass);
    }
}


SSR Shader

First, there are many ways to reconstruct position from depth, for example the approach in Feng Lele's book 《Unity Shader入门精要》, or this Zhihu article: Unity从深度缓冲重建世界空间位置 - 知乎 (zhihu.com).

The principles are all much the same; here I reconstruct via NDC space. The idea:

half4 viewRayNDC = half4(v.uv * 2 - 1, 1, 1);
float4 viewRayPS = viewRayNDC * _ProjectionParams.z;

This gives the far plane's position in clip space. Transform it into view space, let the hardware interpolate it from vertex to fragment to get a ray vector, then in the fragment multiply by the 0-1 linear depth decoded from the depth texture: that yields the view-space position, whose normalized form is also the view direction. The normal is just as easy to read from the same texture. One catch: writing HLSL for URP, we no longer have CG's DecodeDepthNormal(), so download the built-in shaders' CGIncludes from the Unity site and copy the decode functions over:

//===========================================================================
inline float3 DecodeViewNormalStereo( float4 enc4 )
{
    float kScale = 1.7777;
    float3 nn = enc4.xyz*float3(2*kScale,2*kScale,0) + float3(-kScale,-kScale,1);
    float g = 2.0 / dot(nn.xyz,nn.xyz);
    float3 n;
    n.xy = g*nn.xy;
    n.z = g-1;
    return n;
}
inline float DecodeFloatRG( float2 enc )
{
    float2 kDecodeDot = float2(1.0, 1/255.0);
    return dot( enc, kDecodeDot );
}
inline void DecodeDepthNormal( float4 enc, out float depth, out float3 normal )
{
    depth = DecodeFloatRG (enc.zw);
    normal = DecodeViewNormalStereo (enc);
}
//===========================================================================
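To make the reconstruction concrete before the full shader, here is a condensed sketch of how these decode helpers combine with the interpolated far-plane ray; it uses the same names (rayVS, _CameraDepthNormalsTexture) that appear in the complete listing further down:

// Vertex: build a ray to the far plane and bring it into view space.
half4 viewRayNDC = half4(v.uv * 2 - 1, 1, 1);            // NDC position on the far plane
float4 viewRayPS = viewRayNDC * _ProjectionParams.z;     // scale by the far plane distance
o.rayVS = mul(unity_CameraInvProjection, viewRayPS);     // interpolated per fragment

// Fragment: scale the interpolated ray by linear 0-1 depth to land on the surface.
float depth01;
float3 normalVS;
half4 enc = SAMPLE_TEXTURE2D(_CameraDepthNormalsTexture, sampler_CameraDepthNormalsTexture, i.uv);
DecodeDepthNormal(enc, depth01, normalVS);
float3 posVS = i.rayVS.xyz * depth01;                    // view-space position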

With the groundwork above, the reflection direction falls right out:

half3 reflectDir = reflect(viewDir, normalVS);

Finally, the fun part: ray marching. Just as described earlier, once the marched depth exceeds the scene depth at that screen position, we sample the corresponding pixel. (For why the thickness term is needed, see Puppet_Master's blog post on reflections: without a thickness limit, a ray passing far behind an object still registers as deeper than the stored depth and gets accepted as a hit, which is clearly wrong and produces weirdly stretched reflections.)

[Figure: a ray passing behind an object being falsely accepted as a hit, causing stretched reflections]

//SSR
UNITY_LOOP
for (int k = 0; k <= _MaxStep; k++)
{
    // March along the reflected ray in view space.
    float3 reflPos = posVS + reflectDir * _StepSize * k;

    // Project the sample point back to a screen UV.
    float4 reflPosCS = mul(unity_CameraProjection, float4(reflPos, 1));
    reflPosCS.xy /= reflPosCS.w;
    reflUV = reflPosCS.xy * 0.5 + 0.5;

    // Scene depth at that pixel, decoded to view-space units (with a small bias).
    float4 reflDepthNormal = SAMPLE_TEXTURE2D(_CameraDepthNormalsTexture, sampler_CameraDepthNormalsTexture, reflUV);
    float depth = DecodeFloatRG(reflDepthNormal.zw) * _ProjectionParams.z + 0.2;
    float reflDepth = -reflPos.z;

    if (length(reflPos - posVS) > _MaxDistance) break;

    // A hit: the ray is behind the surface, but not deeper than _Thickness.
    if (reflUV.x > 0.0 && reflUV.y > 0.0 && reflUV.x < 1.0 && reflUV.y < 1.0 && depth < reflDepth && reflDepth < depth + _Thickness)
        return finCol = SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, reflUV);
}

The complete SSR shader:

 Shader "Hidden/SSR"
{
Properties
{
_MainTex ("Texture", 2D) = "white" {}
_MaxStep("MaxStep",Float)=10
_StepSize("StepSize",Float)=1
_MaxDistance("MaxDistance",Float)=10
_Thickness("Thickness",Float)=1
}
SubShader
{
Tags { "RenderType"="Opaque" }
LOD 100

Pass
{
Tags { "LightMode"="UniversalForward" }
HLSLPROGRAM
#pragma vertex vert
#pragma fragment frag

#include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"



struct appdata
{
float4 vertex : POSITION;
float2 uv : TEXCOORD0;
};

struct v2f
{
float2 uv : TEXCOORD0;

float4 vertex : SV_POSITION;
float4 rayVS:TEXCOORD1;
};

CBUFFER_START(UnityPerMaterial)

float _MaxStep;
float _StepSize;
float _MaxDistance;
float _Thickness;
CBUFFER_END


TEXTURE2D(_MainTex); SAMPLER(sampler_MainTex);
TEXTURE2D(_CameraDepthNormalsTexture); SAMPLER(sampler_CameraDepthNormalsTexture);
//===========================================================================
inline float3 DecodeViewNormalStereo( float4 enc4 )
{
float kScale = 1.7777;
float3 nn = enc4.xyz*float3(2*kScale,2*kScale,0) + float3(-kScale,-kScale,1);
float g = 2.0 / dot(nn.xyz,nn.xyz);
float3 n;
n.xy = g*nn.xy;
n.z = g-1;
return n;
}
inline float DecodeFloatRG( float2 enc )
{
float2 kDecodeDot = float2(1.0, 1/255.0);
return dot( enc, kDecodeDot );
}
inline void DecodeDepthNormal( float4 enc, out float depth, out float3 normal )
{
depth = DecodeFloatRG (enc.zw);
normal = DecodeViewNormalStereo (enc);
}
//===========================================================================



v2f vert (appdata v)
{
v2f o;
o.vertex = TransformObjectToHClip(v.vertex);
o.uv =v.uv;
#if UNITY_UV_STARTS_TOP
o.uv.y=1-o.uv.y;
#endif
half4 viewRayNDC=half4(v.uv*2-1,1,1);
float4 viewRayPS=viewRayNDC*_ProjectionParams.z;
o.rayVS=mul(unity_CameraInvProjection,viewRayPS);


return o;
}

half4 frag (v2f i) : SV_Target
{

half4 finCol=0;
float2 reflUV=0;


half4 depthNormals=SAMPLE_TEXTURE2D(_CameraDepthNormalsTexture,sampler_CameraDepthNormalsTexture,i.uv);
float Linear01depth;
float3 normalVS;
DecodeDepthNormal(depthNormals,Linear01depth,normalVS);

float3 posVS=i.rayVS.xyz*Linear01depth;



half3 viewDir=normalize(posVS);
normalVS=normalize(normalVS);
half3 reflectDir=reflect(viewDir,normalVS);
// return half4(normalVS,1);
//SSR
UNITY_LOOP
for(int i=0;i<=_MaxStep;i++)
{
float3 reflPos=posVS+reflectDir*_StepSize*i;


float4 reflPosCS=mul(unity_CameraProjection,float4(reflPos,1));
reflPosCS.xy/=reflPosCS.w;
reflUV= reflPosCS.xy*0.5+0.5;
float4 reflDepthNormal=SAMPLE_TEXTURE2D(_CameraDepthNormalsTexture,sampler_CameraDepthNormalsTexture,reflUV);
float depth=DecodeFloatRG(reflDepthNormal.zw)*_ProjectionParams.z+0.2;
float reflDepth=-reflPos.z;

if(length(reflPos-posVS)>_MaxDistance) break;

if(reflUV.x > 0.0 && reflUV.y > 0.0 && reflUV.x < 1.0 && reflUV.y < 1.0 &&depth<reflDepth&&reflDepth<depth+_Thickness)
return finCol=SAMPLE_TEXTURE2D(_MainTex,sampler_MainTex,reflUV);


}


return finCol;
}
ENDHLSL
}
}
}

Binary-search refinement

The basic SSR now works, but one part clearly invites optimization: the ray marching. A uniformly small step size looks good but costs many iterations, so quality and performance pull against each other. Instead, we can binary search: start with a fairly large stride, and once the ray's depth exceeds the scene depth, halve the step and march back, repeating until the depth difference is within the thickness.

//SSR with refinement
UNITY_LOOP
for (int k = 0; k <= _MaxStep; k++)
{
    float3 reflPos = posVS + reflectDir * _StepSize * k;

    float4 reflPosCS = mul(unity_CameraProjection, float4(reflPos, 1));
    reflPosCS.xy /= reflPosCS.w;
    reflUV = reflPosCS.xy * 0.5 + 0.5;
    float4 reflDepthNormal = SAMPLE_TEXTURE2D(_CameraDepthNormalsTexture, sampler_CameraDepthNormalsTexture, reflUV);
    float depth = DecodeFloatRG(reflDepthNormal.zw) * _ProjectionParams.z + 0.2;
    float reflDepth = -reflPos.z;

    if (length(reflPos - posVS) > _MaxDistance || reflUV.x < 0.0 || reflUV.y < 0.0 || reflUV.x > 1.0 || reflUV.y > 1.0) break;

    //if(reflUV.x > 0.0 && reflUV.y > 0.0 && reflUV.x < 1.0 && reflUV.y < 1.0 && depth < reflDepth && reflDepth < depth + _Thickness)
    // Flip the marching direction whenever the ray crosses the surface.
    half Sign = sign(depth - reflDepth);
    _StepSize *= Sign;
    if (abs(depth - reflDepth) < _Thickness)
        return finCol = SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, reflUV);
}
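Note that the snippet above only flips the sign of _StepSize, so it reverses direction without actually shrinking the stride. A more literal bisection, matching the halving described in the paragraph, might look like the following sketch (same variables and helpers as the full shader; an illustrative variant, not the code used for the screenshots):

// Hypothetical bisection variant, using the same variables as the shader above.
float3 p = posVS;
float stepLen = _StepSize;                 // start with a coarse stride

UNITY_LOOP
for (int k = 0; k < _MaxStep; k++)
{
    float3 candidate = p + reflectDir * stepLen;
    float4 cs = mul(unity_CameraProjection, float4(candidate, 1));
    float2 uv = cs.xy / cs.w * 0.5 + 0.5;
    float sceneDepth = DecodeFloatRG(SAMPLE_TEXTURE2D(_CameraDepthNormalsTexture,
                           sampler_CameraDepthNormalsTexture, uv).zw) * _ProjectionParams.z;
    float rayDepth = -candidate.z;

    if (abs(rayDepth - sceneDepth) < _Thickness)   // close enough: accept the hit
        return SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, uv);

    if (rayDepth > sceneDepth)                     // overshot: stay put, halve the stride
        stepLen *= 0.5;
    else                                           // still in front: accept the move
        p = candidate;
}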

Dither and Dual Kawase blur

Now the problem is noise. There are many ways to attack it; here I go with dither plus blur.

For the dither I lifted Puppet_Master's approach directly, except that he feeds the matrix in from C# while I define it in the shader.

Tip: what is dither? My take: think of a mosquito net in real life. It isn't transparent, it's a mesh full of holes, yet from a distance it reads as translucent. Dithering simulates that kind of patterned jitter to approximate an effect at lower cost. Applied to the ray here, it randomizes the march offset, breaking up the banding noise; a blur afterwards further softens the graininess the dither introduces.

#define SSRDitherMatrix_m0 float4(0,0.5,0.125,0.625)
#define SSRDitherMatrix_m1 float4(0.75,0.25,0.875,0.375)
#define SSRDitherMatrix_m2 float4(0.187,0.687,0.0625,0.562)
#define SSRDitherMatrix_m3 float4(0.937,0.437,0.812,0.312)

// Look up a dither value from the 4x4 matrix based on the pixel coordinate...
float2 ditherXY = i.vertex.xy;
float4x4 SSRDitherMatrix = float4x4(SSRDitherMatrix_m0, SSRDitherMatrix_m1, SSRDitherMatrix_m2, SSRDitherMatrix_m3);
float2 XY = floor(fmod(ditherXY, 4));
float dither = SSRDitherMatrix[XY.y][XY.x];
// ...and jitter the ray along the reflection direction inside the march loop.
float3 reflPos = posVS + reflectDir * _StepSize * k + reflectDir * dither;

Why Dual Kawase for the blur? It performs well and doesn't look much worse than a Gaussian. Mainly, though, I'd never written one before ^ ^.

Tip: for Dual Kawase, see Mao Xingyun (QianMo)'s survey 高品质后处理:十种图像模糊算法的总结与实现 on his CSDN blog 【浅墨的游戏编程Blog】.

[Figure: Dual Kawase downsample/upsample kernel diagram]

For a concrete implementation, see QianMo's open-source library: X-PostProcessing-Library/Assets/X-PostProcessing/Effects/DualKawaseBlur at master · QianMo/X-PostProcessing-Library (github.com)

The code below goes in as passes 1 and 2 of the SSR shader:

Pass
{
    //Pass Dual-Kawase-----DownSample
    Tags { "LightMode"="UniversalForward" }
    HLSLPROGRAM
    #pragma vertex vert
    #pragma fragment frag

    #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"

    struct appdata
    {
        float4 vertex : POSITION;
        float2 uv : TEXCOORD0;
    };

    struct v2f
    {
        float2 uv : TEXCOORD0;
        float4 vertex : SV_POSITION;
        float4 uv01 : TEXCOORD1;
        float4 uv23 : TEXCOORD2;
        // uv45/uv67 from the original draft were never used in this pass and are dropped.
    };

    CBUFFER_START(UnityPerMaterial)
    float _Offset;
    float2 _MainTex_TexelSize;
    CBUFFER_END

    TEXTURE2D(_MainTex); SAMPLER(sampler_MainTex);

    v2f vert (appdata v)
    {
        v2f o;
        o.vertex = TransformObjectToHClip(v.vertex.xyz);
        o.uv = v.uv;
        #if UNITY_UV_STARTS_TOP
        o.uv.y = 1 - o.uv.y;
        #endif
        // Use a local offset instead of overwriting the material float with a float2.
        float blurOffset = 1 + _Offset;
        o.uv01.xy = o.uv - _MainTex_TexelSize * blurOffset;
        o.uv01.zw = o.uv + _MainTex_TexelSize * blurOffset;
        o.uv23.xy = o.uv - float2(_MainTex_TexelSize.x, -_MainTex_TexelSize.y) * blurOffset;
        o.uv23.zw = o.uv + float2(_MainTex_TexelSize.x, -_MainTex_TexelSize.y) * blurOffset; // was mistakenly assigned to uv23.xy twice
        return o;
    }

    half4 frag (v2f i) : SV_Target
    {
        // One center tap weighted 4x plus the four diagonal corners, normalized by 1/8.
        half4 sum = SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, i.uv) * 4;
        sum += SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, i.uv01.xy);
        sum += SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, i.uv01.zw);
        sum += SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, i.uv23.xy);
        sum += SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, i.uv23.zw);
        return sum * 0.125;
    }
    ENDHLSL
}

Pass
{
    //Pass Dual-Kawase-----UpSample
    Tags { "LightMode"="UniversalForward" }
    HLSLPROGRAM
    #pragma vertex vert
    #pragma fragment frag

    #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"

    struct appdata
    {
        float4 vertex : POSITION;
        float2 uv : TEXCOORD0;
    };

    struct v2f
    {
        float2 uv : TEXCOORD0;
        float4 vertex : SV_POSITION;
        float4 uv01 : TEXCOORD1;
        float4 uv23 : TEXCOORD2;
        float4 uv45 : TEXCOORD3;
        float4 uv67 : TEXCOORD4;
    };

    CBUFFER_START(UnityPerMaterial)
    float2 _MainTex_TexelSize;
    float _Offset;
    CBUFFER_END

    TEXTURE2D(_MainTex); SAMPLER(sampler_MainTex);

    v2f vert (appdata v)
    {
        v2f o;
        o.vertex = TransformObjectToHClip(v.vertex.xyz);
        o.uv = v.uv;
        #if UNITY_UV_STARTS_TOP
        o.uv.y = 1 - o.uv.y;
        #endif
        // Work on locals rather than overwriting the cbuffer values.
        float2 texel = _MainTex_TexelSize * 0.5;
        float blurOffset = 1 + _Offset;
        o.uv01.xy = o.uv + float2(-texel.x * 2, 0) * blurOffset;
        o.uv01.zw = o.uv + float2(-texel.x, texel.y) * blurOffset;
        o.uv23.xy = o.uv + float2(0, texel.y * 2) * blurOffset;
        o.uv23.zw = o.uv + texel * blurOffset;
        o.uv45.xy = o.uv + float2(texel.x * 2, 0) * blurOffset;
        o.uv45.zw = o.uv + float2(texel.x, -texel.y) * blurOffset;
        o.uv67.xy = o.uv + float2(0, -texel.y * 2) * blurOffset;
        o.uv67.zw = o.uv - texel * blurOffset;
        return o;
    }

    half4 frag (v2f i) : SV_Target
    {
        // Eight taps in a diamond, with the diagonal taps weighted 2x; weights sum to 12.
        half4 sum = 0;
        sum += SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, i.uv01.xy);
        sum += SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, i.uv01.zw) * 2;
        sum += SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, i.uv23.xy);
        sum += SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, i.uv23.zw) * 2;
        sum += SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, i.uv45.xy);
        sum += SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, i.uv45.zw) * 2;
        sum += SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, i.uv67.xy);
        sum += SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, i.uv67.zw) * 2;
        return sum * 0.0833;
    }
    ENDHLSL
}


The RendererFeature is adjusted accordingly: allocate the matching RTs and run the endless downsample-blur/upsample-blur chain. (Note that the Volume component from earlier needs three extra parameters used below: Radius, DownSample, and Iteration.)

using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;
using System;
using System.Collections;
using System.Collections.Generic;

public class SSRRenderPassFeature : ScriptableRendererFeature
{
    public Material ssrMtl;

    class CustomRenderPass : ScriptableRenderPass
    {
        Material ssrMtl;
        ScreenSpaceReflection ssr;
        RenderTextureDescriptor descriptor;
        RenderTargetHandle ssr_handle;
        RenderTargetIdentifier source;

        int[] downSampleID;
        int[] upSampleID;
        const string ssr_tag = "Screen Space Reflection Pass";
        ShaderTagId shader_tag = new ShaderTagId("UniversalForward"); // leftover from a DrawRenderers draft, unused

        public void Setup(RenderTargetIdentifier source)
        {
            this.source = source;
        }

        public CustomRenderPass(Material ssrMtl, RenderTargetHandle ssr_handle)
        {
            this.ssr_handle = ssr_handle;
            this.ssrMtl = ssrMtl;
            var stack = VolumeManager.instance.stack;
            ssr = stack.GetComponent<ScreenSpaceReflection>();
        }

        public override void Configure(CommandBuffer cmd, RenderTextureDescriptor cameraTextureDescriptor)
        {
            descriptor = cameraTextureDescriptor;
            cmd.GetTemporaryRT(ssr_handle.id, descriptor, FilterMode.Bilinear);
            ConfigureTarget(ssr_handle.Identifier());
            ConfigureClear(ClearFlag.All, Color.black);
        }

        public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
        {
            if (ssr != null && ssr.isActive.value)
            {
                CommandBuffer cmd = CommandBufferPool.Get(ssr_tag);
                using (new ProfilingScope(cmd, new ProfilingSampler(ssr_tag)))
                {
                    //ssr
                    cmd.Blit(source, ssr_handle.Identifier(), ssrMtl, 0);
                    cmd.SetGlobalTexture("_SSRTexture", ssr_handle.Identifier());
                    ssrMtl.SetFloat("_MaxStep", ssr.MaxStep.value);
                    ssrMtl.SetFloat("_StepSize", ssr.StepSize.value);
                    ssrMtl.SetFloat("_MaxDistance", ssr.MaxDistance.value);
                    ssrMtl.SetFloat("_Thickness", ssr.Thickness.value);
                    ssrMtl.SetFloat("_Offset", ssr.Radius.value);

                    //dual-kawase
                    int PixelWidth = (int)(descriptor.width / ssr.DownSample.value);
                    int PixelHeight = (int)(descriptor.height / ssr.DownSample.value); // was descriptor.width, a copy-paste slip
                    downSampleID = new int[16];
                    upSampleID = new int[16];
                    for (int i = 0; i < ssr.Iteration.value; ++i)
                    {
                        downSampleID[i] = Shader.PropertyToID("_DownSample" + i);
                        upSampleID[i] = Shader.PropertyToID("_UpSample" + i);
                    }
                    RenderTargetIdentifier temp = ssr_handle.Identifier();
                    for (int i = 0; i < ssr.Iteration.value; ++i)
                    {
                        // Allocate the RTs, then halve the resolution for the next level.
                        cmd.GetTemporaryRT(downSampleID[i], PixelWidth, PixelHeight, descriptor.depthBufferBits, FilterMode.Bilinear, RenderTextureFormat.ARGB32);
                        cmd.GetTemporaryRT(upSampleID[i], PixelWidth, PixelHeight, descriptor.depthBufferBits, FilterMode.Bilinear, RenderTextureFormat.ARGB32);
                        PixelHeight = Mathf.Max(PixelHeight / 2, 1);
                        PixelWidth = Mathf.Max(PixelWidth / 2, 1);
                        cmd.Blit(temp, downSampleID[i], ssrMtl, 1);
                        temp = downSampleID[i];
                    }
                    for (int j = ssr.Iteration.value - 2; j >= 0; --j)
                    {
                        cmd.Blit(temp, upSampleID[j], ssrMtl, 2);
                        temp = upSampleID[j];
                    }
                    cmd.Blit(temp, ssr_handle.Identifier());
                    for (int k = 0; k < ssr.Iteration.value; ++k)
                    {
                        cmd.ReleaseTemporaryRT(downSampleID[k]);
                        cmd.ReleaseTemporaryRT(upSampleID[k]);
                    }
                }
                context.ExecuteCommandBuffer(cmd);
                CommandBufferPool.Release(cmd);
            }
        }

        public override void FrameCleanup(CommandBuffer cmd)
        {
        }
    }

    CustomRenderPass m_ScriptablePass;
    RenderTargetHandle ssr_handle;

    public override void Create()
    {
        // Init the handle before the pass copies it; RenderTargetHandle is a struct, copied by value.
        ssr_handle.Init("_SSRTexture");
        m_ScriptablePass = new CustomRenderPass(ssrMtl, ssr_handle);
        m_ScriptablePass.renderPassEvent = RenderPassEvent.AfterRenderingTransparents;
    }

    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
        m_ScriptablePass.Setup(renderer.cameraColorTarget);
        renderer.EnqueuePass(m_ScriptablePass);
    }
}



With a bit of parameter tweaking, this produces the following reflection RT:

[Figure: the final reflection render target after dither and blur]

From here, any shader can sample this RT at the screen coordinate however you like, as sketched below. And that's a wrap!
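To make that concrete, here is a sketch of what consuming the global _SSRTexture from an object's forward pass could look like. It assumes a v2f whose vertex field is SV_POSITION, like the shaders above, and the 0.5 blend at the end is an arbitrary illustration rather than part of the feature:

// Sketch: sampling the reflection RT bound globally by the RendererFeature.
TEXTURE2D(_SSRTexture); SAMPLER(sampler_SSRTexture);

half4 frag (v2f i) : SV_Target
{
    // SV_POSITION arrives in pixel coordinates; normalize to a 0-1 screen UV.
    float2 screenUV = i.vertex.xy / _ScreenParams.xy;
    half4 reflection = SAMPLE_TEXTURE2D(_SSRTexture, sampler_SSRTexture, screenUV);

    half4 baseCol = half4(0.2, 0.2, 0.2, 1); // stand-in for the surface's ordinary shading
    return lerp(baseCol, reflection, 0.5);   // blend however your material sees fit
}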

Closing

This post only offers the principle and one way of doing things, sharing my understanding of SSR and RendererFeatures purely as a learning record. No serious person does SSR in forward rendering anyway. I hope you got something out of it and enjoyed the read. Bye!

Author: luqc

Original link: https://ever17-luqc.github.io/SSR/

Published: May 30th 2021, 11:45:30 am

Updated: July 29th 2021, 6:35:52 pm

License: this post is licensed under the Creative Commons Attribution-NonCommercial 4.0 International license
