OpenGL ES - glReadPixels

In the end, it was a lack of memory. The "new uint8[dataLength];" never returned a valid pointer, so everything downstream was corrupted.

TomA, your idea of clearing the buffer actually helped me to solve the problem. Thanks.
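For anyone who hits the same thing, a minimal sketch of guarding that allocation before the readback (dataLength, width, and height are placeholders; the std::nothrow form is an assumption, since a plain new would normally throw std::bad_alloc rather than return null):

#include <new>      // for std::nothrow
#include <cstdint>  // for uint8_t

// dataLength must be width * height * 4 for GL_RGBA / GL_UNSIGNED_BYTE.
uint8_t *data = new (std::nothrow) uint8_t[dataLength];
if (data == nullptr) {
    // Allocation failed: bail out instead of handing glReadPixels a bad pointer.
    return;
}
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);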


That is a driver bug. Simple as that.

The driver got the pitch (row stride) of the surface in video memory wrong; you can see this clearly in the upper lines. The garbage in the lower part of the image is memory where the driver thinks the image is stored, but which actually holds other data, perhaps textures or vertex data.

And sorry, I know of no way to fix that. You may have better luck with a different surface format or by enabling/disabling multisampling.
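One client-side cause of the same row-skew symptom may be worth ruling out before blaming the driver: glReadPixels packs rows according to GL_PACK_ALIGNMENT, which defaults to 4. GL_RGBA rows are always a multiple of four bytes, but formats like GL_RGB often are not, and then the default alignment shifts every row exactly like a wrong pitch. It costs nothing to check:

// Request byte-tight rows so the packed pitch is exactly
// width * bytes-per-pixel, with no alignment padding.
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);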


I don't know about Android or the SDK you're using, but on iOS when I take a screenshot I have to make the buffer the size of the next POT texture, something like this:

// Round the pixel dimensions up to the next power of two.
int x = NextPot((int)(screenSize.x * retina));
int y = NextPot((int)(screenSize.y * retina));

// 4 bytes per pixel for GL_RGBA / GL_UNSIGNED_BYTE.
void *buffer = malloc(x * y * 4);

glReadPixels(0, 0, x, y, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

The function NextPot just gives me the next POT size, so if the screen size is 320x480, x and y come out as 512 and 512.
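NextPot's body isn't shown in the post; a plausible implementation would be something like:

// Smallest power of two >= v (assumed implementation of the
// NextPot helper mentioned above).
static int NextPot(int v) {
    int pot = 1;
    while (pot < v)
        pot <<= 1;
    return pot;
}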

Maybe what you're seeing is the wrap-around of the buffer, because the driver is expecting a bigger buffer size?

Also, this could be a reason for it to work in the simulator and not on the device; my graphics card doesn't have the POT size limitation, and I get a similar (weird-looking) result.


What I assume is happening is that you are trying to use glReadPixels on a window that is covered. If the view area is covered, the result of glReadPixels is undefined.

See "How do I use glDrawPixels() and glReadPixels()?" and "The Pixel Ownership Problem".

As said there:

The solution is to make an offscreen buffer (FBO) and render to the FBO.

Another option is to make sure the window is not covered when you use glReadPixels.
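A minimal sketch of the FBO route in OpenGL ES 2.0 (width, height, and pixels are placeholders, and error handling is reduced to the completeness check):

GLuint fbo, tex;

// Color attachment: an RGBA texture the size of the area to read back.
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// Attach it to an offscreen framebuffer and render there instead of the window.
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE) {
    glViewport(0, 0, width, height);
    // ... draw the scene here ...
    // Pixel ownership doesn't apply to an FBO, so this read is well defined.
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);  // back to the default framebuffer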