Add a function to Graphics that handles resizing our back buffers when the window size changes.
This is the part that I've been avoiding, handling the resizing of the back buffer and all related resources.
First, let's call our function OnResize; the On prefix is a common idiom for naming event handlers.
Code: Select all
class Graphics
{
public:
void OnResize( int width, int height );
};
Code: Select all
void Graphics::OnResize( int width, int height )
{
if( !( ( width == ScreenWidth ) && ( height == ScreenHeight ) ) )
{
pImmediateContext->OMSetRenderTargets( 0, nullptr, nullptr );
pRenderTargetView = Microsoft::WRL::ComPtr<ID3D11RenderTargetView>{};
pSysBufferTextureView = Microsoft::WRL::ComPtr<ID3D11ShaderResourceView>{};
pSysBufferTexture = Microsoft::WRL::ComPtr<ID3D11Texture2D>{};
if( auto hr = pSwapChain->ResizeBuffers(
0u,
0u,
0u,
DXGI_FORMAT_UNKNOWN,
0
); FAILED( hr ) )
{
throw std::system_error( hr, std::system_category(), "Failed to resize buffers" );
}
ScreenWidth = width;
ScreenHeight = height;
// get handle to backbuffer
ComPtr<ID3D11Resource> pBackBuffer;
if( auto hr = pSwapChain->GetBuffer( 0, __uuidof( ID3D11Texture2D ), ( LPVOID* )&pBackBuffer );
FAILED( hr ) )
throw GFX_EXCEPTION( hr, L"Getting back buffer" );
// create a view on backbuffer that we can render to
if( auto hr = pDevice->CreateRenderTargetView( pBackBuffer.Get(), nullptr, &pRenderTargetView );
FAILED( hr ) )
throw GFX_EXCEPTION( hr, L"Creating render target view on backbuffer" );
// set backbuffer as the render target using created view
pImmediateContext->OMSetRenderTargets( 1, pRenderTargetView.GetAddressOf(), nullptr );
if( auto* mem = _aligned_malloc( width * height * sizeof( Color ), 16 ); mem != nullptr )
{
Color* temp = new( mem ) Color{};
_aligned_free( pSysBuffer );
pSysBuffer = temp;
}
else
{
throw std::bad_alloc();
}
// create texture for cpu render target
D3D11_TEXTURE2D_DESC sysTexDesc;
sysTexDesc.Width = ScreenWidth;
sysTexDesc.Height = ScreenHeight;
sysTexDesc.MipLevels = 1;
sysTexDesc.ArraySize = 1;
sysTexDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
sysTexDesc.SampleDesc.Count = 1;
sysTexDesc.SampleDesc.Quality = 0;
sysTexDesc.Usage = D3D11_USAGE_DYNAMIC;
sysTexDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
sysTexDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
sysTexDesc.MiscFlags = 0;
// create the texture
if( auto hr = pDevice->CreateTexture2D( &sysTexDesc, nullptr, &pSysBufferTexture );
FAILED( hr ) )
throw GFX_EXCEPTION( hr, L"Creating sysbuffer texture" );
D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
srvDesc.Format = sysTexDesc.Format;
srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MipLevels = 1;
// create the resource view on the texture
if( auto hr = pDevice->CreateShaderResourceView(
pSysBufferTexture.Get(),
&srvDesc,
&pSysBufferTextureView );
FAILED( hr ) )
throw GFX_EXCEPTION( hr, L"Creating view on sysBuffer texture" );
}
}
There's a lot to go over here so I'll try to break it all down.
Code: Select all
if( ( width == ScreenWidth ) && ( height == ScreenHeight ) )return;
Since we will need to free and reallocate memory, we probably don't want to do that if the size hasn't changed. This function should only ever be called when the size actually changes, but it doesn't hurt to err on the safe side.
Code: Select all
pImmediateContext->OMSetRenderTargets( 0, nullptr, nullptr );
pRenderTargetView = Microsoft::WRL::ComPtr<ID3D11RenderTargetView>{};
pSysBufferTextureView = Microsoft::WRL::ComPtr<ID3D11ShaderResourceView>{};
pSysBufferTexture = Microsoft::WRL::ComPtr<ID3D11Texture2D>{};
According to MSDN, IDXGISwapChain::ResizeBuffers requires that all references to the back buffers, including their render target views, be released and unbound first. Since the chili framework has only one render target and one associated render target view, we can just assign a default-constructed ComPtr to each, which releases the underlying interfaces. We also need to unbind the render target view from the pipeline by passing nullptr to OMSetRenderTargets.
Code: Select all
if( auto hr = pSwapChain->ResizeBuffers(
0u,
0u,
0u,
DXGI_FORMAT_UNKNOWN,
0
); FAILED( hr ) )
{
throw std::system_error( hr, std::system_category(), "Failed to resize buffers" );
}
According to the documentation, passing 0 for the buffer count, width, and height, and DXGI_FORMAT_UNKNOWN for the format, tells DXGI to keep the existing buffer count and format and take the new size from the client area of the window associated with the IDXGISwapChain object. From my testing, this function can fail if there are still outstanding references to the associated render target and view, but this code has worked for me so far.
Code: Select all
ScreenWidth = width;
ScreenHeight = height;
// get handle to backbuffer
ComPtr<ID3D11Resource> pBackBuffer;
if( auto hr = pSwapChain->GetBuffer( 0, __uuidof( ID3D11Texture2D ), ( LPVOID* )&pBackBuffer );
FAILED( hr ) )
throw GFX_EXCEPTION( hr, L"Getting back buffer" );
// create a view on backbuffer that we can render to
if( auto hr = pDevice->CreateRenderTargetView( pBackBuffer.Get(), nullptr, &pRenderTargetView );
FAILED( hr ) )
throw GFX_EXCEPTION( hr, L"Creating render target view on backbuffer" );
// set backbuffer as the render target using created view
pImmediateContext->OMSetRenderTargets( 1, pRenderTargetView.GetAddressOf(), nullptr );
The width and height passed to us are the width and height of the drawable (client) area of the window, so we can assign them to ScreenWidth and ScreenHeight directly.
Since we released the render target ( back buffer ) and view, they need to be recreated.
Code: Select all
if( auto* mem = _aligned_malloc( width * height * sizeof( Color ), 16 ); mem != nullptr )
{
Color* temp = new( mem ) Color{};
_aligned_free( pSysBuffer );
pSysBuffer = temp;
}
else
{
throw std::bad_alloc();
}
This code may not be familiar to most. I recently learned that it is undefined behavior to just cast the void* returned from an allocation function and use it as an object without starting the object's lifetime first, which is what placement new does. To use placement new, you have to #include <new>. You'll also notice that I allocate into a temp pointer before freeing pSysBuffer, so that if the allocation fails, the previous buffer is still usable. Once we're sure the allocation succeeded, we can free the current buffer and assign the temp pointer to pSysBuffer.
The same should have been done with the back buffer and render target, but I haven't implemented that part; I'm not sure whether I would need to create a second swap chain to get a new back buffer or whether doing it this way is enough, since I don't believe the back buffer is affected if the function fails.
Code: Select all
// create texture for cpu render target
D3D11_TEXTURE2D_DESC sysTexDesc;
sysTexDesc.Width = ScreenWidth;
sysTexDesc.Height = ScreenHeight;
sysTexDesc.MipLevels = 1;
sysTexDesc.ArraySize = 1;
sysTexDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
sysTexDesc.SampleDesc.Count = 1;
sysTexDesc.SampleDesc.Quality = 0;
sysTexDesc.Usage = D3D11_USAGE_DYNAMIC;
sysTexDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
sysTexDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
sysTexDesc.MiscFlags = 0;
// create the texture
if( auto hr = pDevice->CreateTexture2D( &sysTexDesc, nullptr, &pSysBufferTexture );
FAILED( hr ) )
throw GFX_EXCEPTION( hr, L"Creating sysbuffer texture" );
D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
srvDesc.Format = sysTexDesc.Format;
srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MipLevels = 1;
// create the resource view on the texture
if( auto hr = pDevice->CreateShaderResourceView(
pSysBufferTexture.Get(),
&srvDesc,
&pSysBufferTextureView );
FAILED( hr ) )
throw GFX_EXCEPTION( hr, L"Creating view on sysBuffer texture" );
The last part just recreates the buffer texture and the shader resource view that pSysBuffer is copied into during Graphics::EndFrame().