Is it possible to do old school 2D blitting on a modern GPU?

It may be possible on some embedded systems to get a framebuffer pointer and write to it directly, but these days you're better off using OpenGL ES and rendering to a texture. It will be more portable, and probably faster.

You could create a buffer in main memory, do all the bit twiddling you want, and then render it as a texture. The driver can DMA your texture data to VRAM for speed, and then you render it on a quad, which is equivalent to a blit but doesn't burn CPU cycles and runs as fast as the GPU can go.

It's amazing what you can do with shaders and programmable pipelines these days.


Check out the glBlitFramebuffer routine (part of the framebuffer object functionality). You'll need an up-to-date driver.

Keep in mind you can still use the default framebuffer, but I think it's more fun to use framebuffer objects.

Keep your sprites in separate framebuffers (maybe rendered using OpenGL), set one as the read source (using glReadBuffer), and blit it onto the draw framebuffer (using glDrawBuffers). It's quite simple and fast.
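A hedged sketch of that blit path, assuming you already have two FBO names, `fboSprite` and `fboScene` (placeholder names), each with a color attachment set up; this needs a live GL 3.0+ context, so it's illustrative only:

```c
/* Source: the FBO holding the sprite. */
glBindFramebuffer(GL_READ_FRAMEBUFFER, fboSprite);
glReadBuffer(GL_COLOR_ATTACHMENT0);              /* attachment to read from  */

/* Destination: the FBO holding the scene. */
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fboScene);
GLenum bufs[] = { GL_COLOR_ATTACHMENT0 };
glDrawBuffers(1, bufs);                          /* attachment to write to   */

/* Copy a 64x64 sprite to position (100, 50); GL_NEAREST keeps it pixel-exact. */
glBlitFramebuffer(0, 0, 64, 64,                  /* source rectangle      */
                  100, 50, 164, 114,             /* destination rectangle */
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);

/* To present the result, blit the scene FBO to the default framebuffer (0). */
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, 640, 480, 0, 0, 640, 480,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
```

Note that glBlitFramebuffer can also scale if the source and destination rectangles differ in size, which plain old blitting couldn't do for free.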


Well, you can use libSDL, get a pointer to the screen framebuffer, and do whatever you want with the pixels. Or you can do all your drawing to a memory buffer, load it into a GL texture, and draw textured quads, which is probably faster because of hardware acceleration.
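A sketch of that second option (memory buffer → GL texture → textured quad), assuming a legacy fixed-function GL context and a 640x480 RGBA main-memory buffer named `pixels` (both assumptions of mine); it needs a live context, so it's illustrative only:

```c
/* One-time setup: a texture the size of your software framebuffer. */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 640, 480, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

/* Per frame: re-upload the CPU buffer and draw one fullscreen quad
   (fixed-function GL for brevity; flip the t coordinate because GL's
   texture origin is bottom-left). */
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 640, 480,
                GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);
glTexCoord2f(0, 1); glVertex2f(-1, -1);
glTexCoord2f(1, 1); glVertex2f( 1, -1);
glTexCoord2f(1, 0); glVertex2f( 1,  1);
glTexCoord2f(0, 0); glVertex2f(-1,  1);
glEnd();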


If you're drawing textured quads, with which you can easily simulate "old school" blitting, then the pixels are indeed copied within video memory by the GPU. Also note that while bitmap operations such as glDrawPixels are possible in OpenGL, they can be painfully slow, because the 3D path is what gets optimized on consumer-grade video cards, whereas the 2D paths may not be.