Got side tracked...again
Posted: June 23rd, 2019, 7:35 pm
With all the commotion over real-time ray tracing, and with a few members of the board having taken a crack at it, I figured I'd take a stab at it for shits and giggles. I haven't gotten far; so far I've just copy/pasted some ray/triangle intersection code and some code for calculating rays from the camera. I'd thought about this a few times before, when a couple of others shared their ray-tracer code here.
The common approach seems to be to recalculate the rays each frame, which means millions of calls to sqrt, depending on screen resolution. The reason is the difference in how transformations are handled in rasterization versus ray tracing. In all the ray-tracing code I could find, you move the camera through the world along with the near plane or screen (so rotating the camera rotates the near plane), and then cast rays in the new direction the camera is facing. This differs from the way we usually handle it in DirectX, where you take the inverse of the camera transformation so the camera becomes the reference point, or origin. That means the camera doesn't actually move per se; you're actually moving the world toward or away from the camera. It seems trippy to think of it that way.
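To make the inverse-transform idea concrete, here's a minimal sketch of a rasterizer-style view transform. The Vec3 struct and the yaw-only camera are my own simplifications for illustration, not code from any particular framework:

```cpp
#include <cmath>

// Minimal 3D vector for illustration (hypothetical helper, not from any framework).
struct Vec3 { float x, y, z; };

// Rasterizer-style view transform: instead of moving the camera through the
// world, apply the INVERSE of the camera's transform to every world-space
// point. The camera here is assumed to have only a position and a yaw angle.
Vec3 WorldToView(Vec3 p, Vec3 camPos, float camYaw)
{
    // Inverse translation: subtract the camera position.
    p.x -= camPos.x; p.y -= camPos.y; p.z -= camPos.z;
    // Inverse rotation: rotate by -yaw about the Y axis.
    const float c = std::cos(-camYaw), s = std::sin(-camYaw);
    return Vec3{ c * p.x + s * p.z, p.y, -s * p.x + c * p.z };
}
```

After this transform the camera is effectively pinned at the origin looking down its fixed forward axis, which is exactly what lets the rays stay put.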
To me, the DirectX approach would be far more efficient for ray tracing, since the rays could be precalculated: the view never changes. So that's my starting point. I mostly follow the same steps I would in D3D, or in this case something closer to the 3D Fundamentals framework: I use a vertex buffer and transform all the vertices with a world and view matrix. Because of how the rays work, you don't need a projection matrix.
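A sketch of what that precalculation might look like, assuming a left-handed D3D-style setup with the camera at the origin looking down +Z toward a near plane at z = 1 (the parameter names and layout are my own, not from the post's code):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// A normalized ray direction; the origin is always the camera at (0,0,0).
struct Ray { float dx, dy, dz; };

// Precompute one normalized ray direction per pixel, once at startup.
// All of the sqrt calls happen here, not once per frame.
std::vector<Ray> PrecomputeRays(int width, int height, float aspect, float tanHalfFov)
{
    std::vector<Ray> rays;
    rays.reserve(std::size_t(width) * height);
    for (int y = 0; y < height; ++y)
    {
        for (int x = 0; x < width; ++x)
        {
            // Map the pixel center to [-1, 1], then onto the near plane at z = 1.
            const float px = (2.0f * (x + 0.5f) / width - 1.0f) * aspect * tanHalfFov;
            const float py = (1.0f - 2.0f * (y + 0.5f) / height) * tanHalfFov;
            const float len = std::sqrt(px * px + py * py + 1.0f);
            rays.push_back(Ray{ px / len, py / len, 1.0f / len });
        }
    }
    return rays;
}
```

Since the view never changes, this table can be built once and reused every frame.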
The code I chose to copy uses counter-clockwise winding for its vertices, which I'm not as familiar with, since -Z is the forward-facing direction like in OpenGL. I'll have to play around with it and see if I can make the changes needed to get it working the other way round, if only for my sanity.
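One way to flip winding without touching the vertex data itself is to swap two indices of every triangle in the index buffer. A minimal sketch, assuming a flat index buffer with three indices per triangle:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Convert between counter-clockwise and clockwise winding by swapping the
// second and third index of every triangle. The vertex positions themselves
// are untouched; only the traversal order (and thus the facing) changes.
void FlipWinding(std::vector<int>& indices)
{
    for (std::size_t i = 0; i + 2 < indices.size(); i += 3)
        std::swap(indices[i + 1], indices[i + 2]);
}
```

Running it twice gets you back to the original order, so it's easy to experiment with both conventions.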
After reading through some surface-level material, it seems the next step is to partition the scene into a grid of sorts and sort the triangles into the grid's cells. This is where my idea kind of breaks down, I suppose. If the camera is the thing that moves, you have to recalculate the rays, yes, but the triangles can be presorted into the grid once. If the world moves instead, the rays can be precalculated, but the triangles have to be re-sorted into the grid every frame.
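The sorting step might look something like the sketch below: a uniform grid, flattened into one array, with each triangle binned by the cell containing its centroid. That's a simplification of my own; a robust version would insert a triangle into every cell its bounding box overlaps, and the cubic scene bounds are an assumption too:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Centroid only, for illustration; a real triangle would carry its vertices.
struct Tri { float cx, cy, cz; };

// Bin triangle indices into a uniform cells x cells x cells grid spanning
// [minB, maxB] on every axis. Each grid cell holds the indices of the
// triangles whose centroid falls inside it.
std::vector<std::vector<int>> BinTriangles(const std::vector<Tri>& tris,
                                           int cells, float minB, float maxB)
{
    std::vector<std::vector<int>> grid(std::size_t(cells) * cells * cells);
    const float invSize = cells / (maxB - minB);
    for (int i = 0; i < (int)tris.size(); ++i)
    {
        auto cell = [&](float v) {
            return std::clamp(int((v - minB) * invSize), 0, cells - 1);
        };
        const int ix = cell(tris[i].cx), iy = cell(tris[i].cy), iz = cell(tris[i].cz);
        grid[(std::size_t(iz) * cells + iy) * cells + ix].push_back(i);
    }
    return grid;
}
```

With a static world this runs once at load time; with a moving world it's exactly the per-frame cost described above.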
I'll just have to keep going and see what happens. One of the benefits of ray tracing is supposed to be scaling: in rasterization, the more complex the scene, or the more triangles there are, the slower the render, whereas with ray tracing, the more pixels you have, the slower the render. The ray count is bounded by the number of pixels you display times the number of rays you send out per pixel. But a level or scene could have millions of triangles, so sorting them every frame would still tie performance to the triangle count, losing the benefit ray tracing was supposed to bring.
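That pixel-bound cost is easy to put a number on. A trivial sketch of the primary-ray budget (secondary bounce rays would add to this, but the primary count depends only on resolution, never on triangle count):

```cpp
#include <cstdint>

// Upper bound on primary rays per frame: pixel count times samples per pixel.
std::uint64_t PrimaryRayCount(std::uint64_t width, std::uint64_t height,
                              std::uint64_t samplesPerPixel)
{
    return width * height * samplesPerPixel;
}
```

At 1920x1080 with one ray per pixel that's about 2 million primary rays per frame, regardless of whether the scene has a hundred triangles or a million.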
Oh well, down the rabbit hole I go.