Adaptive Frameless Rendering

Authors: Woolley, Cliff; Dayal, Abhinav; Watson, Ben; Luebke, Dave (Department of Computer Science, University of Virginia)

We propose an adaptive form of frameless rendering with the potential to dramatically increase rendering speed over conventional interactive rendering approaches. Without the rigid sampling patterns of framed renderers, sampling and reconstruction can adapt with very fine granularity to spatio-temporal color change. A sampler uses closed-loop feedback to guide sampling toward edges or motion in the image. Temporally deep buffers store all the samples created over a short time interval for use in reconstruction and as sampler feedback. GPU-based reconstruction responds both to sampling density and space-time color gradients. Where the displayed scene is static, spatial color change dominates and older samples are given significant weight in reconstruction, resulting in sharper and eventually antialiased images. Where the scene is dynamic, more recent samples are emphasized, resulting in less sharp but more up-to-date images. We also use sample reprojection to improve reconstruction and guide sampling toward occlusion edges, undersampled regions, and specular highlights. In simulation our frameless renderer requires an order of magnitude fewer samples than traditional rendering of similar visual quality (as measured by RMS error), while introducing overhead amounting to 15% of computation time.
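The reconstruction behavior described above, where older samples keep weight in static regions but fade quickly where the scene is dynamic, can be illustrated with a small sketch. This is not the paper's implementation: the Gaussian space-time weighting, the `temporal_gradient` parameter, and the rule tying the temporal falloff to local color change are illustrative assumptions standing in for the renderer's actual gradient-driven filter.

```python
import math

def reconstruct_pixel(samples, temporal_gradient, sigma_s=1.0):
    """Weighted average of deep-buffer samples near one pixel.

    samples: list of (color, age, dist) tuples, where color is an (r, g, b)
        tuple, age is seconds since the sample was shaded, and dist is its
        spatial distance from the pixel center.
    temporal_gradient: estimated local color change per unit time; large
        values mean the region is dynamic, so old samples should fade fast.
    """
    # Hypothetical rule: the temporal falloff tightens as local change grows.
    sigma_t = 1.0 / (1.0 + temporal_gradient)
    total_w = 0.0
    accum = [0.0, 0.0, 0.0]
    for color, age, dist in samples:
        # Gaussian falloff in both space and time.
        w = math.exp(-(dist / sigma_s) ** 2) * math.exp(-(age / sigma_t) ** 2)
        total_w += w
        for i in range(3):
            accum[i] += w * color[i]
    if total_w == 0.0:
        return (0.0, 0.0, 0.0)
    return tuple(c / total_w for c in accum)
```

With `temporal_gradient` near zero, an aged sample still contributes to the average, which is what lets static regions sharpen and antialias over time; with a large gradient, the same aged sample is effectively discarded and the result tracks the most recent shading.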

All rights reserved (no additional license for public reuse)
Source Citation:

Woolley, Cliff, Abhinav Dayal, Ben Watson, and Dave Luebke. "Adaptive Frameless Rendering." University of Virginia Dept. of Computer Science Tech Report (2005).

University of Virginia, Department of Computer Science