Introduction
Copy your implementations of calculateDirectIllumination() and calculateIndirectIllumination() in shading.frag from tutorial 4 to the place-holder implementation in this tutorial (the shader is also called shading.frag in this tutorial). The result should look like below:

- Render from the security camera's point of view to a framebuffer.
- Use the color texture from the framebuffer as emissive texture.
There is a helper class, FboInfo, but it is not completed yet. In the constructor, we have generated two textures, one for color and one for depth, but we have not yet bound them together into a framebuffer.
Setting up Framebuffer objects
First, generate a framebuffer object and bind it:
// >>> @task 1
glGenFramebuffers(1, &framebufferId);
glBindFramebuffer(GL_FRAMEBUFFER, framebufferId);
Then attach the color texture as color attachment 0 (there may be many color attachments per framebuffer):
// bind the texture as color attachment 0 (to the currently bound framebuffer)
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTextureTarget, 0);
glDrawBuffer(GL_COLOR_ATTACHMENT0);
And attach the depth texture as the depth attachment (there can only be one per framebuffer):
// bind the texture as depth attachment (to the currently bound framebuffer)
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthBuffer, 0);
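After attaching both textures, it is good practice to ask the driver whether the framebuffer is complete. This check is not required by the tutorial code, but it is a minimal sketch (assuming the framebuffer is still bound) that catches most setup mistakes:

```cpp
// Sanity check: verify that the currently bound framebuffer is complete,
// i.e. that all attachments are consistent and renderable.
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE) {
	printf("Framebuffer is not complete (status 0x%x)\n", status);
}
```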
Now we have an easy way to create framebuffers. At the end of initGL(), there is a section devoted to the creation of framebuffers. We will create five framebuffers and push them onto the vector fboList, enough framebuffers for the mandatory and optional assignments in this tutorial.
int w, h;
SDL_GetWindowSize(g_window, &w, &h);
const int numFbos = 5;
for (int i = 0; i < numFbos; i++) {
	fboList.push_back(FboInfo(w, h));
}
We have now initialized the framebuffers with the initial size of the window. The window may, however, be resized by the user, so we reallocate the textures whenever the resolution changes. This is already done at the beginning of display(). Have a look at how it is implemented.
Rendering to the FBO
We will now render the scene from the security camera's point of view into the first framebuffer in fboList. Bind the framebuffer at @task 2 with:
// >>> @task 2
FboInfo &securityFB = fboList[0];
glBindFramebuffer(GL_FRAMEBUFFER, securityFB.framebufferId);
And directly after you have bound the framebuffer, set the viewport and clear it:
glViewport(0, 0, securityFB.width, securityFB.height);
glClearColor(0.2f, 0.2f, 0.8f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
Now, render the scene to this framebuffer, just as we do from the user's camera, but skip rendering the security camera obj-model (since we are rendering from within it). Use the view and projection matrices that are already provided: securityCamViewMatrix and securityCamProjectionMatrix.
Now we are going to use the color attachment as a texture (emissiveMap in shading.frag) when we render the landing pad from the user's view. We will just change the texture id used in the landing pad model:
labhelper::Material &screen = landingpadModel->m_materials[8];
screen.m_emission_texture.gl_id = securityFB.colorTextureTarget;
The result should look like below. You can change the security camera's direction by holding the right mouse button and moving the mouse. Try it out!

Rendering the FBO fullscreen
Now, render the scene from the user's camera into another framebuffer from fboList, and then render normally.
If you run the application now you will see a black screen, since nothing has been rendered to the default framebuffer.
To see the scene rendered on the screen again:
- Bind the default framebuffer and clear it.
- Set postFxShader as the active shader program.
- Bind the framebuffer texture to texture unit 0.
- Draw a quad that covers the viewport to start a fragment shader for each pixel on the screen (see below).
labhelper::drawFullScreenQuad();
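Put together, the steps above might look like the following sketch. The name postProcessFbo is assumed here for the framebuffer holding the user-view rendering (the same name used in the blur steps of the optional section):

```cpp
// 1. Bind the default framebuffer (id 0), set the viewport, and clear it.
int w, h;
SDL_GetWindowSize(g_window, &w, &h);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, w, h);
glClearColor(0.2f, 0.2f, 0.8f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// 2. Activate the post-processing shader.
glUseProgram(postFxShader);

// 3. Bind the offscreen color texture to texture unit 0.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, postProcessFbo.colorTextureTarget);

// 4. Draw a full-screen quad to run the fragment shader once per pixel.
labhelper::drawFullScreenQuad();
```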
Post processing
We will now add some post-processing effects to postFxShader, the shader we used in the previous task to draw the framebuffer to the full screen. To change which effect is used, and to pass some parameters these effects need, we have to set a few uniforms for the shader. Add the following code after postFxShader has been set as active, and before drawing the fullscreen quad:
labhelper::setUniformSlow(postFxShader, "time", currentTime);
labhelper::setUniformSlow(postFxShader, "currentEffect", currentEffect);
labhelper::setUniformSlow(postFxShader, "filterSize", filterSizes[filterSize - 1]);
The currently used effect is controlled with the uniform currentEffect, which is set from the GUI. You can toggle the GUI's visibility by pressing G (you can also comment out the if statement surrounding the call to gui() in main()). With the Sepia post-processing effect, which mimics a toning technique from black-and-white photography, the result looks like below.

vec4 textureRect(in sampler2D tex, vec2 rectangleCoord)
{
return texture(tex, rectangleCoord / textureSize(tex, 0));
}
This allows us to sample the texture with pixel coordinates, using the GLSL built-in variable gl_FragCoord as texture coordinates; it supplies the screen-space coordinates (within the render target) of the fragment being shaded. Normally, textures are sampled with coordinates in the range [0, 1].
These functions are used from the main function in the shader. Try out different ones, and combine them. Note the effect that chains all of the other effects (except grayscale).
vec2 mushrooms(vec2 inCoord);
Perturbs the sampling coordinates of the pixel and returns the new coordinates. These can then be used to sample the framebuffer. The effect uses a sine wave to make us feel woozy. Can you make it worse?
vec3 blur(vec2 coord);
Uses a primitive box filter to blur the image. This method is low quality and expensive; test using a large filter and note the FPS counter in the debug overlay. For real-time purposes, a separable blur is preferable, which requires several passes. We will explain this process in the (optional) section Efficient Blur and Bloom below.
vec3 grayscale(vec3 sample);
The grayscale() function simply returns the luminance (perceived brightness) of the input sample color.
vec3 toSepiaTone(vec3 rgbSample);
The toSepiaTone() function converts the color sample to sepia tone (by transformation to the YIQ color space), to make it look somewhat like an old photo.
Experiment with the different effects; for example, change the colorization in the sepia tone effect. Can you make it red? Also try combining the effects, and try to understand how each one produces its result.
Post processing - Mosaic

When done, show your result to one of the assistants. Have the finished program running and be prepared to explain what you have done.
[Optional] Efficient Blur

There are two new fragment shaders: shaders/horizontal_blur.frag and shaders/vertical_blur.frag. Load these together with the vertex shader shaders/postFx.vert, and store the references in variables named horizontalBlurShader and verticalBlurShader. To render the blur, use this algorithm:
- Render a full-screen quad into an fbo (here called horizontalBlurFbo).
  - Use the shader horizontalBlurShader.
  - Bind the postProcessFbo.colorTextureTarget as input frame texture.
- Render a full-screen quad into an fbo (here called verticalBlurFbo).
  - Use the shader verticalBlurShader.
  - Bind the horizontalBlurFbo.colorTextureTarget as input frame texture.

[Optional] 6: Bloom
There is another new fragment shader: shaders/cutoff.frag. Load the shader, use the fifth created FBO (here called cutoffFbo), and draw a full-screen pass into it. When visualized, it should look like this:



When done, show your result to one of the assistants. Ask them more about post processing!