Best way to write pixels with software renderer

Post Reply
Posts: 14
Joined: September 21st, 2019, 6:35 pm

Best way to write pixels with software renderer

Post by damoos » September 21st, 2019, 6:44 pm

I am writing a software-based rendering engine (for fun/education). Clearly bottom-line speed isn't my goal, but I would like it to be as fast as it can be without GPU/DirectX help. Using C++ and Windows 10, VS 2017, Win32 API.

Back in the old days, I would have access to a frame buffer, and would have handled my own swapping to screen memory.

Nowadays, I'm not sure what the best way to actually implement rasterization is.

My gut says to just create a window and use Direct3D to write individual pixels.

Next notion was just to use GDI calls. If the GDI is fast enough, I am fine with this idea.

I considered an OpenGL interface, but it doesn't seem suited.

Crazy idea: creating a Bitmap, treating it as a framebuffer, and blitting to the program window each frame.

Any ideas? Last time I did this, you would run your code on DOS with direct access to the hardware.

User avatar
Posts: 190
Joined: November 14th, 2014, 2:03 am

Re: Best way to write pixels with software renderer

Post by cyboryxmen » September 22nd, 2019, 12:14 pm

Computers nowadays are built such that the display is attached directly to the GPU. This means that if you want anything displayed, you will need to go through the GPU. Even GDI is merely an abstraction that hides all the GPU management code from you.

If you want reasonable performance for your software renderer, you'll want to use DirectX, OpenGL or Vulkan in order to access the GPU yourself and manage it to perform optimally according to your application's specifications. The entire process simply involves sending an image to the GPU and rendering it onto a rectangle that covers the whole screen.

Posts: 4312
Joined: February 28th, 2013, 3:23 am
Location: Oklahoma, United States

Re: Best way to write pixels with software renderer

Post by albinopapa » September 23rd, 2019, 9:18 pm

To be fair, that is exactly what the Chili framework does. The Graphics class is basically a wrapper around a buffer of integers and the Direct3D API. When you use Graphics::PutPixel, you are updating an integral value somewhere in system memory. At the end of each frame, that system memory is copied into a texture in GPU memory. That texture is then used on a pair of triangles that form a quad ( rectangle ) filling the entire screen.

GDI+ does most of its work on the CPU using SIMD instructions. It probably does the same thing as the Chili framework: fill a system buffer, copy the system buffer to a back buffer on the GPU, then swap back with front.

Direct2D, however, is a little different: it issues commands to build a scene using D3D behind the scenes. Direct2D is going to be your faster option. If you still want something simple, quick and portable, use SFML or SDL2 ( SFML has a C++ interface and also supports audio, networking and multi-threading, while SDL2 has a C interface and, if I recall, image and audio loading are separate libraries ).

If you are dead set on starting from scratch, the Chili framework is your best place to start; it's pretty quick to get results up and running.
If you think paging some data from disk into RAM is slow, try paging it into a simian cerebrum over a pair of optical nerves.

Post Reply