Re: Z divide fundamental question
Posted: May 13th, 2020, 5:13 pm
Not exactly sure what you are trying to find out, but I'm going to throw something out there and see if I'm close.
I think you're asking about the motivation behind the Z depth not being linear, but I'm not clear on what part you're referring to as nonlinear. The closest I can offer is something I read quite a while ago. The projection matrix maps Z into a range where objects close to the camera at similar depths can be distinguished more accurately than objects further away. If I recall correctly, the reason for this is floating-point rounding error. Most of the objects of interest in the field of view are close to the camera and more critical to the scene than objects far away, so Z values for objects really close to the camera end up spaced quite far apart, while for distant objects the differences shrink: you end up comparing values like .9998f and .9997f instead of .8f and .9f.
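Here's a quick sketch of what I mean, using a D3D-style projection that maps view-space Z in [near, far] to [0, 1]. The near and far values (1 and 100) are just made up for illustration:

```cpp
#include <cstdio>
#include <initializer_list>

int main()
{
    // D3D-style projection depth: maps view-space z in [n, f] to [0, 1] via
    //   depth = f/(f - n) - (f*n) / ((f - n) * z)
    // n and f are assumed values, purely for illustration.
    const float n = 1.0f;    // near plane
    const float f = 100.0f;  // far plane

    const float a = f / (f - n);
    const float b = (f * n) / (f - n);

    for (float z : { 1.0f, 2.0f, 10.0f, 50.0f, 90.0f, 100.0f })
    {
        const float depth = a - b / z;
        std::printf("view z = %6.1f  ->  depth = %f\n", z, depth);
    }
    // Half of the [0, 1] depth range is spent on view z between 1 and 2,
    // while z from 50 to 100 gets squeezed into roughly [0.99, 1.0].
}
```

That uneven spread is where the distant Z-fighting comes from; as I understand it, a W buffer stores the linear view-space depth instead, trading that near-camera precision for an even spread.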
This article kind of explains Z buffer vs W buffer, which is more or less what I described above.
As for the values from one pixel to the next when interpolating from one vertex to another, I too had trouble wrapping my head around that for a while. I think chili covers it in his 3D Fundamentals series, and that's what flipped the switch for me. Even if a pixel lies halfway between the left and right sides of a triangle, that doesn't mean its depth is halfway between the front and back of the triangle when you view the surface from an angle closer to parallel; depth doesn't change linearly across the screen, but 1/z does.
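A tiny sketch of that, with made-up endpoint depths (left end of a scanline at view z = 1, right end at z = 10): lerping z directly in screen space gives the wrong answer, while lerping 1/z and inverting gives the perspective-correct one.

```cpp
#include <cstdio>

int main()
{
    // Assumed depths at the two ends of a screen-space span, for illustration.
    const float zLeft  = 1.0f;
    const float zRight = 10.0f;

    for (int i = 0; i <= 4; ++i)
    {
        const float t = i * 0.25f;  // fraction of the way across the span

        // Naive: lerp z directly in screen space (incorrect under perspective).
        const float zNaive = zLeft + t * (zRight - zLeft);

        // Perspective-correct: 1/z varies linearly in screen space,
        // so lerp 1/z and invert.
        const float invZ = 1.0f / zLeft + t * (1.0f / zRight - 1.0f / zLeft);
        const float zCorrect = 1.0f / invZ;

        std::printf("t = %.2f  naive z = %6.3f  correct z = %6.3f\n",
                    t, zNaive, zCorrect);
    }
    // At t = 0.5 the naive lerp says z = 5.5, but the true depth is
    // 1 / ((1/1 + 1/10) / 2), about 1.818: the pixel halfway across the
    // span is nowhere near halfway between front and back in view space.
}
```

Same story for texture coordinates and the other vertex attributes: divide by z first, interpolate, then multiply back, which I believe is exactly what the series walks through.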