Why should we use eye-space coordinates in the fragment stage of the OpenGL pipeline?

There are several reasons eye space is used:

  1. It's convenient. It's a well-defined space, and one you compute on the way to transforming positions anyway.
  2. It has the same scale as world space, but without world space's problems. Eye-space coordinates are always (relatively) close to zero, since the eye sits at the origin, so it's a reasonable space to have an explicit transform matrix for. The scale is important because you can supply distances computed in world space (like light attenuation terms): distances don't change between world space and eye space (see the first sketch after this list).
  3. You need to transform into a linear space anyway. Doing lighting, particularly with attenuation, in a non-linear space like a post-projection space is... tricky. So you have to provide normals and positions in some kind of linear space, and it may as well be eye space.
  4. It requires the fewest transforms. Eye space is the space right before the projection transform. If you have to reverse-transform back into a linear space (in deferred rendering, for example), eye space is the one that requires the fewest operations: you only have to undo the projection (see the second sketch below).
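To illustrate points 2 and 3, here is a minimal fragment-shader sketch; the names (posEye, lightPosEye, attenuation, albedo) are made up for the example, and the eye-space inputs are assumed to arrive from the vertex shader. Note that the distance d has world scale, so world-space attenuation coefficients apply directly, and the view vector is simply -posEye because the eye sits at the origin:

```glsl
#version 330 core

in vec3 posEye;            // fragment position in eye space (from the vertex shader)
in vec3 normalEye;         // normal in eye space

uniform vec3 lightPosEye;  // light position, transformed to eye space on the CPU
uniform vec3 lightColor;
uniform vec3 albedo;
uniform vec3 attenuation;  // (constant, linear, quadratic) terms, world-scale units

out vec4 fragColor;

void main()
{
    vec3  toLight = lightPosEye - posEye;
    float d       = length(toLight);     // same scale as a world-space distance
    vec3  L       = toLight / d;
    vec3  N       = normalize(normalEye);
    vec3  V       = normalize(-posEye);  // the eye is at the origin in eye space
    vec3  H       = normalize(L + V);

    float att  = 1.0 / (attenuation.x + attenuation.y * d + attenuation.z * d * d);
    float diff = max(dot(N, L), 0.0);
    float spec = pow(max(dot(N, H), 0.0), 32.0);

    fragColor = vec4((albedo * diff + vec3(spec)) * lightColor * att, 1.0);
}
```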
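And for point 4, a sketch of the reverse transform in a deferred pass: only the projection has to be undone to get back to eye space. This assumes the default glDepthRange of [0, 1] and a hypothetical invProjection uniform holding the inverse of the projection matrix alone; it's a helper you would call from a deferred lighting shader:

```glsl
#version 330 core

uniform sampler2D depthTex;   // the hardware depth buffer
uniform mat4 invProjection;   // inverse of the projection matrix only

// Reconstruct the eye-space position of a pixel from its window-space depth.
vec3 reconstructEyePos(vec2 uv)
{
    float depth = texture(depthTex, uv).r;                 // in [0, 1]
    vec4 ndc    = vec4(vec3(uv, depth) * 2.0 - 1.0, 1.0); // back to [-1, 1] NDC
    vec4 eye    = invProjection * ndc;                     // undo the projection
    return eye.xyz / eye.w;                                // perspective divide
}
```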

You don't have to supply the camera matrix to the shader and do the light position and direction transformation there. In fact, doing it that way is rather inefficient: you repeat the very same operations on the very same numbers for every vertex.
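As a sketch of the pattern to avoid (the names are hypothetical): the product viewMatrix * lightPosWorld yields the same result at every vertex, yet is recomputed each time:

```glsl
#version 330 core

layout(location = 0) in vec3 position;

uniform mat4 modelView;
uniform mat4 projection;
uniform mat4 viewMatrix;
uniform vec3 lightPosWorld;

out vec3 toLightEye;

void main()
{
    vec3 posEye = (modelView * vec4(position, 1.0)).xyz;
    // Wasteful: the same matrix * vector product, with the same inputs,
    // is re-evaluated for every single vertex.
    vec3 lightPosEye = (viewMatrix * vec4(lightPosWorld, 1.0)).xyz;
    toLightEye  = lightPosEye - posEye;
    gl_Position = projection * vec4(posEye, 1.0);
}
```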

Just transform the light position and direction on the CPU side and supply the already-transformed light parameters to the shader. Lighting calculations are still more concise in eye space, especially when normal mapping is involved. And you have to transform everything into eye space anyway, since normals are not transformed by the perspective transform (though the vertex positions could be transformed directly into clip space).
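Putting it together, a sketch with made-up names: the light arrives already in eye space, the normal is carried only as far as eye space via the normal matrix (the inverse-transpose of the model-view's upper 3x3), and the position continues on to clip space:

```glsl
#version 330 core

layout(location = 0) in vec3 position;
layout(location = 1) in vec3 normal;

uniform mat4 modelView;
uniform mat4 projection;
uniform mat3 normalMatrix;  // inverse-transpose of mat3(modelView)
uniform vec3 lightPosEye;   // view-transformed once per frame on the CPU

out vec3 posEye;
out vec3 normalEye;
out vec3 toLightEye;

void main()
{
    posEye      = (modelView * vec4(position, 1.0)).xyz;
    normalEye   = normalMatrix * normal;           // normals stop here, in eye space
    toLightEye  = lightPosEye - posEye;            // no per-vertex light transform
    gl_Position = projection * vec4(posEye, 1.0);  // positions go on to clip space
}
```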

Tags: C++, OpenGL, Shader