Nov 13, 2012

OpenGL renders to framebuffers. By default it renders to the default framebuffer, the one displayed on screen, which commonly contains a color buffer and a depth buffer. This is fine for the many cases where the pipeline consists of a single pass, a pass being one run through a sequence of shaders. For instance, a simple pass may use only a vertex and a fragment shader.

For more complex graphical effects or techniques, such as shadows or deferred rendering, multiple passes are often required, where the outputs of one pass become the inputs of the following pass, for instance as textures. In this context, instead of rendering to the screen and then copying the result into a texture, it is much nicer to render to a texture directly. The figure shows a two-pass pipeline where the first pass produces three textures that the second pass uses to compose the final image. This is one of the advantages of framebuffer objects: we can render to multiple outputs in a single pass, as the sketch below illustrates.
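To make this concrete, here is a minimal sketch of such a setup using core OpenGL 3.0 entry points. Everything here is illustrative rather than taken from the demo: width and height are assumed to hold the render target size, and the three attachments are plain RGBA8 textures.

    // Minimal sketch: an FBO with three color textures and a depth buffer,
    // so a single pass can write three outputs at once.
    GLuint fbo, colorTex[3], depthRb;

    glGenTextures(3, colorTex);
    for (int i = 0; i < 3; i++)
    {
        glBindTexture(GL_TEXTURE_2D, colorTex[i]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    }

    // A renderbuffer serves as depth buffer; it is never sampled as a texture.
    glGenRenderbuffers(1, &depthRb);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    for (int i = 0; i < 3; i++)
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i,
                               GL_TEXTURE_2D, colorTex[i], 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depthRb);

    // Route fragment shader outputs 0..2 to the three attachments.
    GLenum bufs[3] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1,
                       GL_COLOR_ATTACHMENT2 };
    glDrawBuffers(3, bufs);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    {
        // Handle the error: the attachment combination may be unsupported.
    }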

Moreover, rendering to the screen requires the outputs to be in a displayable format, which is not always the case in a multipass pipeline. Sometimes the textures produced by a pass need a floating-point format that does not translate directly to colors, for instance the speed of a particle in meters per second.
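Non-displayable data like that simply gets a floating-point internal format when the texture is allocated. A one-line variation on the sketch above, assuming OpenGL 3.0 (or ARB_texture_float) and a hypothetical velocityTex texture object:

    // Sketch: a 16-bit float RGBA target, e.g. for per-pixel velocities.
    glBindTexture(GL_TEXTURE_2D, velocityTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0,
                 GL_RGBA, GL_FLOAT, NULL);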

In this short tutorial we will see how a framebuffer object is created and used with shaders. A demo is also provided, with full source code and a VS 2010 solution.
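As a preview of the shader side, writing to several attachments amounts to declaring one output per attachment. A minimal GLSL 3.30 fragment shader for a first pass like the one in the figure might look as follows; all names are illustrative, and the velocity output assumes a floating-point attachment:

    #version 330

    // One output per color attachment; the location matches the slot
    // selected with glDrawBuffers.
    layout(location = 0) out vec4 outColor;    // GL_COLOR_ATTACHMENT0
    layout(location = 1) out vec4 outNormal;   // GL_COLOR_ATTACHMENT1
    layout(location = 2) out vec4 outVelocity; // GL_COLOR_ATTACHMENT2

    in vec3 normal;   // interpolated from the vertex shader
    in vec3 velocity; // e.g. in meters per second

    void main()
    {
        outColor    = vec4(1.0, 0.5, 0.2, 1.0);     // shaded color
        outNormal   = vec4(normalize(normal), 0.0); // geometry data
        outVelocity = vec4(velocity, 0.0);          // non-displayable data
    }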

Jun 03, 2011

A SIGGRAPH 2010 course

“There are strong indications that the future of interactive graphics programming is a model more flexible than today’s OpenGL/Direct3D pipelines. As such, graphics developers need to have a basic understanding of how to combine emerging parallel programming techniques and more flexible graphics processors with the traditional interactive rendering pipeline. The first half of the course introduces attendees to modern parallel graphics architectures and parallel programming models, and describes current and near-term use of these new capabilities for real-time rendering. The second half of the course looks farther ahead at trends emerging in the academic literature and offline rendering communities as researchers use these many-core parallel architectures to explore future rendering pipelines. Topics include future, and more flexible, rendering pipelines that support true motion blur, depth-of-field, curved surfaces, and complex dynamic lighting. The course concludes with a panel, moderated by the creator of OpenGL Kurt Akeley, on the role of fixed function hardware in future graphics architectures.”

Slides are available here.

Mar 19, 2011

A new noise function for GLSL has been proposed by Ian McEwan at Ashima Arts. It does not require any setup, i.e. no textures and no uniform arrays: just add it to your shader source code and call it wherever you want. This makes the final shader much easier to distribute and reuse in other applications. It is based on Stefan Gustavson’s paper “Simplex noise demystified” and runs on GLSL 1.20 and up.
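As a quick illustration of the no-setup claim, the following fragment shader is all that is needed once the noise source has been pasted in above it; snoise(vec3) is assumed to be the 3D simplex noise variant from that distribution, and the input name is illustrative:

    // snoise() comes from the pasted noise source; no textures, no uniforms.
    varying vec3 worldPos; // illustrative input from the vertex shader

    void main()
    {
        // Sum two octaves of simplex noise for a simple fractal pattern.
        float n = 0.5  * snoise(worldPos)
                + 0.25 * snoise(worldPos * 2.0);
        gl_FragColor = vec4(vec3(0.5 + 0.5 * n), 1.0);
    }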


Mar 14, 2011

Recently a number of techniques have been introduced for doing antialiasing as a post-processing step, such as MLAA and, more recently, SRAA. MLAA attempts to figure out the underlying geometric properties by analyzing the pixel colors in the final image; this can be complemented with depth buffer information, as in Jimenez’s MLAA. SRAA uses super-resolution buffers to figure out the geometry. This demo shows a different approach: instead of trying to infer the geometry, it passes down the actual geometry information and uses that to smooth geometric edges very accurately.