Let's say my view frustum is 10 units wide, 10 units high, and 10 units deep.

Let's not do any FOV consideration, so the near plane is at 1.

Let's say I have a vertex at **(-0.5, 0.5, 2.0)** in world space.

After the projection matrix is applied, the **x,y** portion of this vertex's NDC coordinate is **(-0.2, 0.2)**. This makes perfect sense.

The original z component was **2.0**. It is stored in w. My code does all this as expected.

Here is what I don't understand. If I divide my new NDC x,y coordinates by the value stored in w, I'm dividing the **scaled** x,y coordinates by the **un-scaled** z value now stored in w. This can't be right, can it? The result on the screen seems to reflect exactly what I would expect from this: extreme distortion caused by dividing by a too-large z value.

Also, the projection matrix as shown by Chili, and in all my texts, does apply NDC scaling to the original z component, so why isn't that the value I divide by?
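To make the numbers concrete, here's a minimal sketch (in Python) of the situation I'm describing. I'm assuming a D3D-style, left-handed, symmetric frustum with the near plane 10x10 units at n = 1 and the far plane at f = 11 (so the frustum is 10 units deep); my actual matrix may differ, so the exact values below are illustrative only, not the numbers from my own code.

```python
# Follow a single vertex through an assumed perspective projection
# matrix and the divide-by-w step.

n, f = 1.0, 11.0             # near and far plane distances (assumed)
w_near, h_near = 10.0, 10.0  # frustum width/height at the near plane

x, y, z = -0.5, 0.5, 2.0     # the vertex before projection

# Row-by-row effect of the (assumed) projection matrix:
x_clip = x * (2.0 * n / w_near)             # scaled x
y_clip = y * (2.0 * n / h_near)             # scaled y
z_clip = z * f / (f - n) - f * n / (f - n)  # z remapped for the depth range
w_clip = z                                  # original view-space z copied into w

# The perspective divide: x, y, AND z are all divided by the value in w.
x_ndc = x_clip / w_clip
y_ndc = y_clip / w_clip
z_ndc = z_clip / w_clip

print(x_clip, y_clip, z_clip, w_clip)
print(x_ndc, y_ndc, z_ndc)
```

With these assumed parameters, the clip-space coordinate comes out as (-0.1, 0.1, 1.1, 2.0) and the post-divide coordinate as (-0.05, 0.05, 0.55). The part I'm asking about is that last step: the x and y being divided were already scaled by the matrix, but the divisor w was not.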

I know I've posted this question in various ways quite a few times, but no one seems to understand what it is I'm asking! Posting my code would be pointless; it does what I expect it to do. It's the fundamental concept I need! Chili's example code for the 3D Fundamentals series (where he teaches the projection transform) isn't helping me, because by that point in the series he is using his shader system and loading complex geometry. I haven't figured out how to follow a vertex through the pipeline yet. I'm sure one can, but I'd rather just discuss the fundamental concept here.

Thanks!