A camera can generate a depth texture, a depth + normals texture, and a motion vector texture.
The depth texture generated by the camera can be used in post processing to create some very interesting effects.
So it is clear that the camera generates a depth texture every frame, and every pixel you see on the screen has a depth value.
Pixel values in the depth texture range between 0 and 1, with a non-linear distribution. I often found this statement very confusing.
What does that even mean?
Traditionally a value of 1 is the far plane and 0 is the near plane, but Unity uses a reversed Z buffer on most modern platforms, so the values are flipped. And since these values are non-linear, a value of 0.5 doesn't mean a point midway between the near and far planes; it is actually quite close to the camera. This makes raw depth values hard to read. There are macros that help us with this, but let's look into those later.
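To make the reversed-Z part concrete, here is a small fragment-style sketch (my own illustration, assuming the built-in render pipeline and UnityCG.cginc; the SAMPLE_DEPTH_TEXTURE macro is introduced properly below):
float rawDepth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
#if defined(UNITY_REVERSED_Z)
// Reversed-Z platforms (e.g. D3D11/12, Metal): 1 = near plane, 0 = far plane.
float depth01 = 1.0 - rawDepth;
#else
// Conventional depth platforms (e.g. OpenGL): 0 = near plane, 1 = far plane.
float depth01 = rawDepth;
#endif
// Either way, depth01 is still non-linear: 0.5 lies much closer to the camera than to the far plane.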
One more thing to understand here: when using the Deferred Shading or legacy Deferred Lighting rendering path, the depth texture comes "for free", since it is produced as part of G-buffer rendering anyway. When using Forward rendering, however, we need to set the camera's depth texture mode to DepthTextureMode.Depth ourselves.
Simply put this in Awake() or Start() and attach the script to the main camera.
GetComponent<Camera>().depthTextureMode = DepthTextureMode.Depth;
Now let's come back to the shader code.
1. First question: how can we get depth values in our shader code?
Ans. By simply sampling the _CameraDepthTexture.
Depth textures are available for sampling in shaders as global shader properties. By declaring a sampler called _CameraDepthTexture you will be able to sample the main depth texture from the camera.
...
sampler2D _CameraDepthTexture;
float _DepthTesting;
fixed4 frag (v2f i) : SV_Target
{
float depth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
...
The sampled depth values, as mentioned before, are non-linear.
Now for the second question.
2. How can we convert them to a linear value range?
Ans. Converting the depth value from non-linear to linear is most commonly done with two functions: Linear01Depth(depth) and LinearEyeDepth(depth).
Linear01Depth(depth) converts the sampled depth value to a linear 0.0 to 1.0 range, where 0.5 really is halfway between the near and far planes.
fixed4 frag (v2f i) : SV_Target
{
fixed4 col = tex2D(_MainTex, i.uv);
float depth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
depth = Linear01Depth(depth);
...
LinearEyeDepth(depth) takes the sampled depth value and converts it into world-scaled view space depth, e.g. a value of 100 means the point is 100 units away from the camera.
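For illustration, here is a fragment sketch in the same style as the snippets above that uses it directly (assuming _MainTex, _CameraDepthTexture and the v2f struct are declared as before):
fixed4 frag (v2f i) : SV_Target
{
fixed4 col = tex2D(_MainTex, i.uv);
float depth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
// Eye-space depth: distance from the camera in world units along its view direction.
float eyeDepth = LinearEyeDepth(depth);
...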
We can get the same world-unit distance from Linear01Depth as well: multiply Linear01Depth(depth) by the camera's far clip distance, which Unity exposes as _ProjectionParams.z.
fixed4 frag (v2f i) : SV_Target
{
fixed4 col = tex2D(_MainTex, i.uv);
float depth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
depth = Linear01Depth(depth);
depth = depth * _ProjectionParams.z;
...
To demonstrate the theory above, I wrote a small piece of code. I exposed a _DepthTesting parameter to the Inspector, and every pixel whose depth value is greater than _DepthTesting gets its color inverted.
fixed4 frag (v2f i) : SV_Target
{
fixed4 col = tex2D(_MainTex, i.uv);
float depth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
depth = Linear01Depth(depth);
depth = depth * _ProjectionParams.z;
if(depth < _DepthTesting)
return col;
else
return (1 - col);
}
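Putting it all together, here is a minimal, self-contained version of the image effect shader (my own sketch for the built-in render pipeline; the shader name and the default value of _DepthTesting are arbitrary choices, not taken from the snippets above):
Shader "Hidden/DepthTestingDemo"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
        _DepthTesting ("Depth Testing", Float) = 10
    }
    SubShader
    {
        Cull Off ZWrite Off ZTest Always
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct v2f
            {
                float2 uv : TEXCOORD0;
                float4 vertex : SV_POSITION;
            };

            sampler2D _MainTex;
            sampler2D _CameraDepthTexture;
            float _DepthTesting;

            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.uv = v.uv;
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                fixed4 col = tex2D(_MainTex, i.uv);
                // Raw, non-linear depth sample from the camera's depth texture.
                float depth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
                // Linear 0..1 depth, scaled by the far plane (_ProjectionParams.z) into world units.
                depth = Linear01Depth(depth) * _ProjectionParams.z;
                // Invert the color of every pixel farther away than the threshold.
                if (depth < _DepthTesting)
                    return col;
                else
                    return 1 - col;
            }
            ENDCG
        }
    }
}
A shader like this is typically driven from a small camera script: set depthTextureMode to Depth (as shown earlier) and blit the source to the destination with a material using this shader in OnRenderImage, so every screen pixel gets its linear depth compared against the _DepthTesting threshold.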