I'm not exactly sure what you're trying to find out, but I'll throw something out there and see if I'm close.
I think you're asking about the motivation behind the Z depth not being linear, though I'm not clear on which part you consider nonlinear. The closest I can offer is something I read quite a while ago. The projection matrix maps Z into a range that makes distinguishing objects at similar depths close to the camera more accurate than objects further away. If I recall correctly, the reason is rounding error. Most of the objects of interest in a field of view are close to the camera and more critical to the scene than distant objects. Because of this, Z values for objects very close to the camera differ from each other quite a bit, while for distant objects the differences between Z values shrink: the comparisons become more like 0.9998f vs. 0.9997f instead of 0.8f vs. 0.9f.
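To make that compression concrete, here's a small Python sketch (my own illustration, not from the thread) using the common mapping `d = a/z + b` that sends the near plane to 0 and the far plane to 1; the near/far distances 0.1 and 1000 are arbitrary example values:

```python
# Map eye-space depth z in [near, far] to a depth-buffer value d in [0, 1]
# via d = a/z + b. Solving d(near) = 0 and d(far) = 1 gives a and b.
near, far = 0.1, 1000.0          # assumed example plane distances
a = far * near / (near - far)
b = -a / near

def depth(z):
    return a / z + b

# Equal steps in eye space produce very unequal steps in d:
print(depth(2.0) - depth(1.0))    # large gap close to the camera
print(depth(100.0) - depth(99.0)) # tiny gap far away
```

With these planes, moving from z = 1 to z = 2 shifts d by about 0.05, while moving from z = 99 to z = 100 shifts it by only about 0.00001 — exactly the 0.9998f-vs-0.9997f effect described above.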
There's an article that explains Z-buffering vs. W-buffering, which is roughly what I described here.
As for the values from one pixel to the next when interpolating between vertices, I also had trouble wrapping my head around this for a while. Chili covers it in his 3D Fundamentals series, though, and that flipped a switch for me. Even if a pixel lies halfway between the left and right sides of a triangle, its depth is not necessarily halfway between the front and back of the triangle when the surface is viewed at a glancing angle.
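A quick way to see this numerically (my own sketch, not from the series): interpolating z linearly in screen space gives the wrong depth, while interpolating 1/z linearly gives the perspective-correct one. The endpoint depths below are made-up example values:

```python
# Two endpoints of a triangle edge, at eye-space depths z0 and z1
# (arbitrary example values), with t the interpolation factor in
# screen space (t = 0.5 means halfway across in *screen* pixels).
z0, z1 = 1.0, 3.0
t = 0.5

# Naive: interpolate z directly in screen space -- incorrect.
z_linear = (1 - t) * z0 + t * z1           # 2.0

# Perspective-correct: interpolate 1/z linearly, then invert.
inv_z = (1 - t) * (1 / z0) + t * (1 / z1)
z_correct = 1 / inv_z                      # 1.5

print(z_linear, z_correct)
```

The screen-space midpoint corresponds to an eye-space point closer to the camera (1.5 rather than 2.0), which is exactly why the depth at the halfway pixel is not halfway between the front and back of the triangle.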
Z divide fundamental question

Re: Z divide fundamental question
Hello Alaa,
Yes, exactly as you mentioned: Z is nonlinear because it's ideal to have more precision in the near values than in the far ones.
My noob questions are:
1. Is this by design?
2. Or is it a natural byproduct (and happy coincidence) of converting to NDC (cube) space?
3. And just to be doubly sure, does perspective projection happen "before" the perspective divide?
Thanks so much for helping me sort this out in my head, Alaa.
b
Re: Z divide fundamental question
The short answer is that having more precision at the near plane is a "natural" byproduct, but the 1/z relationship is by design. Now for the long answer...
Here's a nice image that made things very clear for me:
https://developer.nvidia.com/sites/defa ... graph1.jpg
How to interpret this graph:
In this graph, d is the value stored in the depth buffer, in the range [0,1], and z is the depth value in view (camera) space. The relationship between d and z is `d = a * (1/z) + b`, where a and b are parameters derived from the near and far planes.
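As a sketch of how a and b fall out of the near/far planes (my own derivation, assuming the common convention that d(near) = 0 and d(far) = 1; other conventions exist):

```python
# Solve d = a * (1/z) + b for a and b given the boundary conditions
# d(near) = 0 and d(far) = 1 (an assumed convention).
near, far = 0.1, 1000.0        # example plane distances (assumed)

a = far * near / (near - far)  # from subtracting the two conditions
b = -a / near                  # back-substituted into d(near) = 0

def d(z):
    return a * (1 / z) + b

print(d(near), d(far))  # approximately 0.0 and 1.0
```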
On the left side of the graph you can see evenly spaced tick marks for d, and when you track them across the graph you can see where they land on the curve. Due to the 1/z relationship, most of the tick marks cluster toward the near plane rather than the far plane.
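You can reproduce that tick-mark clustering numerically by inverting the mapping to get z = a / (d - b). A small sketch of my own, assuming near = 0.1, far = 1000, and the d(near) = 0, d(far) = 1 convention:

```python
# Invert d = a * (1/z) + b to recover the eye-space z that each
# evenly spaced depth-buffer value d corresponds to.
near, far = 0.1, 1000.0            # example plane distances (assumed)
a = far * near / (near - far)
b = -a / near

def z_from_d(d):
    return a / (d - b)

for d in [0.25, 0.5, 0.75, 0.9]:
    print(d, z_from_d(d))
```

With these planes, d = 0.5 corresponds to z of roughly 0.2, and d = 0.9 to z just under 1.0 — ninety percent of the buffer's range is spent on about the first unit of a 1000-unit view distance.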
In fact, there's a pretty nifty trick for getting better precision near the far plane, called "reverse-z". Because floating-point numbers are more precise the closer they are to 0, reverse-z flips the mapping of d so that 0 maps to the far plane and 1 maps to the near plane. That gives you higher floating-point precision at the far plane than at the near plane, but because the 1/z curve naturally devotes more of the range to the near plane, the two "opposite" precision gradients roughly cancel out, and you get more even precision across the whole near-to-far region.
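Here's a small sketch of my own showing the effect, assuming near/far planes of 0.1 and 1000 and float32 depth storage: two distant depths that collapse to the same 32-bit float under the standard mapping stay distinguishable under reverse-z.

```python
import struct

def f32(x):
    """Round a Python float to the nearest 32-bit float."""
    return struct.unpack('f', struct.pack('f', x))[0]

near, far = 0.1, 1000.0  # example plane distances (assumed)

def d_standard(z):
    # standard mapping: d(near) = 0, d(far) = 1
    a = far * near / (near - far)
    return a / z - a / near

def d_reversed(z):
    # reverse-z: d(near) = 1, d(far) = 0
    a = near * far / (far - near)
    return a / z - a / far

z1, z2 = 999.0, 999.001  # two distant, nearly equal depths

# Standard: both land within a couple of float32 ulps of 1.0 and collapse.
print(f32(d_standard(z1)) == f32(d_standard(z2)))  # True

# Reverse-z: both land near 0.0, where float32 spacing is far finer.
print(f32(d_reversed(z1)) == f32(d_reversed(z2)))  # False
```

Near 1.0 the float32 grid spacing is about 6e-8, so the ~1e-10 difference between the two standard-mapped depths vanishes; near 0.0 the grid is orders of magnitude finer, so reverse-z keeps them apart.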
I got the graph from this article, which I very highly recommend you give a read: https://developer.nvidia.com/content/de ... visualized. It contains a much more in-depth explanation of everything I discussed (including reverse-z), and it has a lot more pretty graphs.
Hope this helps.