About Me


This is the research homepage of Erik Sintorn. I am a postdoc in the graphics group at Chalmers University of Technology in Gothenburg, Sweden, working with supervisor Ulf Assarsson. My main research topics are real-time shadows, transparency, and participating media, but my interests include global illumination, GPGPU algorithms, and almost anything else related to computer graphics.

Department of Computer Science and Engineering
Chalmers University of Technology
S-412 96 Gothenburg, SWEDEN
Visiting address:
Rännvägen 6, Room 4118, 4th floor (EDIT-building)
Phone: +46 704 914191

Publications
More Efficient Virtual Shadow Maps for Many Lights (Journal Article)
Ola Olsson, Markus Billeter, Erik Sintorn, Viktor Kämpe and Ulf Assarsson
IEEE Transactions on Visualization and Computer Graphics, 21 (6), 2015

Recently, several algorithms have been introduced that enable real-time performance for many lights in applications such as games. In this paper, we explore the use of hardware-supported virtual cube-map shadows to efficiently implement high-quality shadows from hundreds of light sources in real time and within a bounded memory footprint. In addition, we explore the utility of ray tracing for shadows from many lights and present a hybrid algorithm combining ray tracing with cube maps to exploit their respective strengths. Our solution supports real-time performance with hundreds of lights in fully dynamic high-detail scenes.

preprint pdf | bibtex
Fast, Memory-Efficient Construction of Voxelized Shadows (Inproceeding)
Viktor Kämpe, Erik Sintorn and Ulf Assarsson
I3D 2015

We present a fast and memory-efficient algorithm for generating Compact Precomputed Voxelized Shadows. By performing much of the common sub-tree merging before identical nodes are ever created, we improve construction times by several orders of magnitude for large data structures, and require much less working memory. We also propose a new set of rules for resolving undefined regions, which significantly reduces the final memory footprint of the already heavily compressed data structure. Additionally, we examine the feasibility of using CPVS for many local lights and present two improvements to the original algorithm that allow us to handle hundreds of lights with high-quality, filtered shadows at real-time frame rates.

preprint pdf | bibtex
Compact Precomputed Voxelized Shadows
Erik Sintorn, Viktor Kämpe, Ola Olsson and Ulf Assarsson

Producing high-quality shadows in large environments is an important and challenging problem for real-time applications such as games. We propose a novel data structure for precomputed shadows, which enables high-quality filtered shadows to be reconstructed for any point in the scene. We convert a high-resolution shadow map to a sparse voxel octree, where each node encodes light visibility for the corresponding voxel, and compress this tree by merging common subtrees. The resulting data structure can be many orders of magnitude smaller than the corresponding shadow map. We also show that it can be efficiently evaluated in real time with large filter kernels.

preprint pdf | bibtex | slides | executable | source code
Per-Triangle Shadow Volumes Using a View-Sample Cluster Hierarchy
Erik Sintorn, Viktor Kämpe, Ola Olsson and Ulf Assarsson
I3D 2014

Rendering pixel-accurate shadows in scenes lit by a point light source in real time is still a challenging problem. For scenes of moderate complexity, algorithms based on Shadow Volumes are by far the most efficient in most cases, but traditionally, these algorithms struggle with views where the volumes generate a very high depth complexity. Recently, a method was suggested that alleviates this problem by testing each individual triangle shadow volume against a hierarchical depth map, allowing volumes that are in front of, or behind, the rendered view samples to be efficiently culled. In this paper, we show that this algorithm can be greatly improved by building a full 3D acceleration structure over the view samples and testing per-triangle shadow volumes against that. We show that our algorithm can elegantly maintain high frame rates even for views with very high-frequency depth buffers where previous algorithms perform poorly. Our algorithm also performs better than previous work in general, making it, to the best of our knowledge, the fastest pixel-accurate shadow algorithm to date. It can be used with any arbitrary polygon soup as input, with no restrictions on geometry or required pre-processing, and trivially supports transparent and textured shadow casters.

preprint pdf | bibtex | slides
Efficient Virtual Shadow Maps for Many Lights
Ola Olsson, Erik Sintorn, Viktor Kämpe and Ulf Assarsson
I3D 2014

Recently, several algorithms have been introduced that enable real-time performance for many lights in applications such as games. In this paper, we explore the use of hardware-supported virtual cube-map shadows to efficiently implement high-quality shadows from hundreds of light sources in real time and within a bounded memory footprint. In addition, we explore the utility of ray tracing for shadows from many lights and present a hybrid algorithm combining ray tracing with cube maps to exploit their respective strengths. Our solution supports real-time performance with hundreds of lights in fully dynamic high-detail scenes.

preprint pdf | bibtex
High Resolution Sparse Voxel DAGs (2013)
Viktor Kämpe, Erik Sintorn and Ulf Assarsson

We show that a binary voxel grid can be represented orders of magnitude more efficiently than using a sparse voxel octree (SVO) by generalising the tree to a directed acyclic graph (DAG). While the SVO allows for efficient encoding of empty regions of space, the DAG additionally allows for efficient encoding of identical regions of space, as nodes are allowed to share pointers to identical subtrees. We present an efficient bottom-up algorithm that reduces an SVO to a minimal DAG, which can be applied even in cases where the complete SVO would not fit in memory. In all tested scenes, even the highly irregular ones, the number of nodes is reduced by one to three orders of magnitude. While the DAG requires more pointers per node, the memory cost for these is quickly amortized and the memory consumption of the DAG is considerably smaller, even when compared to an ideal SVO without pointers. Meanwhile, our sparse voxel DAG requires no decompression and can be traversed very efficiently. We demonstrate this by ray tracing hard and soft shadows, ambient occlusion, and primary rays in extremely high resolution DAGs at speeds that are on par with, or even faster than, state-of-the-art voxel and triangle GPU ray tracing.
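The key step described above, merging identical subtrees so that nodes share pointers, can be illustrated with a minimal sketch. This is not the paper's GPU implementation; the node representation (leaf bitmasks, 8-tuples of child indices per inner node, levels ordered bottom-up) is an assumption chosen for brevity:

```python
# Sketch of bottom-up SVO -> DAG reduction: two nodes are identical iff
# their (already deduplicated) children are identical, so a dictionary
# keyed on remapped child indices merges each level in one pass.

def svo_to_dag(leaves, levels):
    """leaves: list of leaf bitmasks; levels: bottom-up list of levels,
    each a list of 8-tuples of child indices (None = empty child).
    Returns the deduplicated levels, leaves first."""
    unique, index_of, remap = [], {}, {}
    for i, leaf in enumerate(leaves):
        if leaf not in index_of:            # first occurrence of this leaf
            index_of[leaf] = len(unique)
            unique.append(leaf)
        remap[i] = index_of[leaf]
    dag_levels = [unique]
    for level in levels:
        seen, new_level, new_remap = {}, [], {}
        for i, children in enumerate(level):
            # Rewrite child pointers through the lower level's merge map.
            key = tuple(None if c is None else remap[c] for c in children)
            if key not in seen:             # unseen subtree: keep one copy
                seen[key] = len(new_level)
                new_level.append(key)
            new_remap[i] = seen[key]
        dag_levels.append(new_level)
        remap = new_remap
    return dag_levels
```

Because merging happens level by level, a subtree only has to match an earlier one structurally; all the sharing in the lower levels has already been folded into the indices being compared.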

preprint pdf | bibtex
An Efficient Alias-free Shadow Algorithm for Opaque and Transparent Objects using per-triangle Shadow Volumes (2011)
Erik Sintorn, Ola Olsson, Ulf Assarsson

This paper presents a novel method for generating pixel-accurate shadows from point light sources in real time. The new method is able to quickly cull pixels that are not in shadow and to trivially accept large chunks of pixels thanks mainly to using the whole triangle shadow volume as a primitive, instead of rendering the shadow quads independently as in the classic Shadow Volume algorithm. Our CUDA implementation outperforms z-fail consistently and surpasses z-pass at high resolutions, although these latter two are hardware accelerated, while inheriting none of the robustness issues associated with these methods. Another, perhaps even more important property of our algorithm is that it requires no pre-processing or identification of silhouette edges, and so robustly and efficiently handles arbitrary triangle soups. In terms of view sample test and set operations performed, we show that our algorithm can be an order of magnitude more efficient than z-pass when rendering a game scene at multi-sampled HD resolutions. We go on to show that the algorithm can be trivially modified to support textured, semi-transparent and colored semi-transparent shadow casters and that it can be combined with either depth peeling or stochastic transparency to also support transparent shadow receivers. Compared to recent alias-free shadow-map algorithms, our method has a very small memory footprint, does not suffer from load-balancing issues, and handles omni-directional lights without modification. It is easily incorporated into any deferred rendering pipeline and combines many of the strengths of shadow maps and shadow volumes.

preprint pdf | bibtex
Volumetric Shadows using Polygonal Light Volumes
Markus Billeter, Erik Sintorn, Ulf Assarsson
High Performance Graphics 2010

This paper presents a more efficient way of computing single scattering effects in homogeneous participating media for real-time purposes than the currently popular ray-marching based algorithms. These effects include halos around light sources, volumetric shadows and crepuscular rays. By displacing the vertices of a base mesh with the depths from a standard shadow map, we construct a polygonal mesh that encloses the volume of space that is directly illuminated by a light source. Using this volume we can calculate the airlight contribution for each pixel by considering only points along the eye-ray where shadow transitions occur. Unlike previous ray-marching methods, our method calculates the exact airlight contribution, with respect to the shadow map resolution, at real-time frame rates.

pdf | bibtex | video
Stochastic Transparency
Eric Enderton, Erik Sintorn, Peter Shirley, David Luebke
Interactive 3D Graphics and Games 2010 (original, shorter)
IEEE Transactions on Visualization and Computer Graphics 2011

Winner of best paper award.
Image selected for proceedings front cover.
Code available in DirectX SDK

Stochastic transparency provides a unified approach to order-independent transparency, anti-aliasing, and deep shadow maps. It augments screen-door transparency using a random sub-pixel stipple pattern, where each fragment of transparent geometry covers a random subset of pixel samples of size proportional to alpha. This results in correct alpha-blended colors on average, in a single render pass with fixed memory size and no sorting, but introduces noise. We reduce this noise by an alpha correction pass, and by an accumulation pass that uses a stochastic shadow map from the camera. At the pixel level, the algorithm does not branch and contains no read-modify-write loops other than traditional z-buffer blend operations. This makes it an excellent match for modern massively parallel GPU hardware. Stochastic transparency is very simple to implement and supports all types of transparent geometry; it can mix hair, smoke, foliage, windows, and transparent cloth in a single scene without coding for special cases.
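The core mechanism, each fragment covering a random subset of samples of size proportional to alpha, followed by an ordinary depth test, can be shown in a toy CPU sketch. This is not the paper's GPU implementation and omits its alpha-correction and shadow-accumulation passes; the scalar "color", sample count, and background value are assumptions for brevity:

```python
import random

# Toy sketch of stochastic transparency: each fragment writes to a
# random subset of a pixel's sub-samples, with subset size round(alpha*S).
# Averaging the samples approaches correct alpha blending, with no
# sorting and fixed memory, at the cost of noise.

def shade_pixel(fragments, num_samples=8, rng=random):
    """fragments: unsorted list of (depth, color, alpha).
    Returns the averaged sample color; background color is 0.0."""
    depth = [float("inf")] * num_samples
    color = [0.0] * num_samples
    for z, c, a in fragments:
        k = round(a * num_samples)                 # coverage proportional to alpha
        for s in rng.sample(range(num_samples), k):
            if z < depth[s]:                       # ordinary z-buffer test
                depth[s], color[s] = z, c
    return sum(color) / num_samples
```

Note that opaque fragments (alpha = 1) cover every sample, so the sketch degenerates exactly to ordinary z-buffering in that case.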

TVCG pdf | bibtex
I3D pdf | bibtex
video1 | video2
Hair Self Shadowing and Transparency Depth Ordering Using Occupancy Maps
Erik Sintorn, Ulf Assarsson
Interactive 3D Graphics and Games 2009
Image selected for proceedings back cover.

This paper presents a method for quickly constructing a high-quality approximate visibility function for high-frequency semi-transparent geometry such as hair. We can then reconstruct the visibility for any fragment without the expensive compression needed by Deep Shadow Maps and with a quality that is much better than what is attainable at similar frame rates using Opacity Maps or Deep Opacity Maps. The memory footprint of our method is also considerably lower than that of previous methods. We then use a similar method to achieve back-to-front sorted alpha blending of the fragments with results that are virtually indistinguishable from depth peeling and an order of magnitude faster.
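As a rough illustration of the occupancy-map idea, visibility at a fragment can be reconstructed from a per-texel bitmask marking which depth slices contain geometry. This is a hedged sketch, not the paper's algorithm: the uniform per-slice alpha and the particular transmittance formula are illustrative assumptions:

```python
# Sketch of visibility reconstruction from an occupancy bitmask:
# bit i of occupancy_bits is set if depth slice i contains geometry.
# Transmittance at a fragment falls off with the number of occupied
# slices between the light and the fragment's slice.

def transmittance(occupancy_bits, frag_slice, alpha_per_slice=0.3):
    """Approximate light reaching a fragment in slice frag_slice."""
    in_front = bin(occupancy_bits & ((1 << frag_slice) - 1)).count("1")
    return (1.0 - alpha_per_slice) ** in_front
```

Because the whole visibility function is a small fixed-size bitmask per texel, no compression pass is needed and lookups are a mask-and-popcount.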

pdf | bibtex
Sample Based Visibility for Soft Shadows using Alias-free Shadow Maps
Erik Sintorn, Elmar Eisemann, Ulf Assarsson
Computer Graphics Forum (Proceedings of the Eurographics Symposium on Rendering 2008)

This paper introduces an accurate real-time soft shadow algorithm that uses sample based visibility. Initially, we present a GPU-based alias-free hard shadow map algorithm that typically requires only a single render pass from the light, in contrast to using depth peeling and one pass per layer. For closed objects, we also suppress the need for a bias. The method is extended to soft shadow sampling for an arbitrarily shaped area-/volumetric light source using 128-1024 light samples per screen pixel. The alias-free shadow map guarantees that the visibility is accurately sampled per screen-space pixel, even for arbitrarily shaped (e.g. non-planar) surfaces or solid objects. Another contribution is a smooth coherent shading model to avoid common light leakage near shadow borders due to normal interpolation.

pdf | bibtex
Real-Time Approximate Sorting for Self Shadowing and Transparency in Hair Rendering
Erik Sintorn, Ulf Assarsson
Proceedings of the Symposium on Interactive 3D Graphics and Games (I3D 2008)
Image selected for proceedings front cover
Updated version presented as poster at GTC and as part of Beyond Programmable Shading course at SIGGRAPH

When rendering materials represented by high-frequency geometry such as hair, smoke or clouds, standard shadow mapping or shadow volume algorithms fail to produce good self-shadowing results due to aliasing. Moreover, in all of the aforementioned examples, properly approximating self-shadowing is crucial to getting realistic results. To cope with this problem, opacity shadow maps have been used: an opacity function is rendered into a set of slices parallel to the light plane. The original Opacity Shadow Map technique [Kim and Neumann 2001] requires the geometry to be rendered once for each slice, making it impossible to render complex geometry into a large set of slices in real time. In this paper we present a method for sorting n line primitives into s sub-sets, where the primitives of one set occupy a single slice, in O(n log s), making it possible to render hair into opacity maps in linear time. It is also shown how the same method can be used to roughly sort the geometry in back-to-front order for alpha blending, to allow for transparency. Finally, we present a way of rendering self-shadowed geometry using a single 2D opacity map, thereby reducing the memory usage significantly.
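The O(n log s) slice-assignment step described above can be pictured as binning each primitive by binary search over the s slice boundaries. This is a hypothetical CPU sketch, not the paper's GPU implementation; the boundary representation and bin layout are assumptions:

```python
import bisect

# Sketch of O(n log s) slice assignment: each of n primitives is binned
# by a log(s) binary search over the sorted slice-plane depths, and each
# bin can then be rendered into one opacity-map slice.

def bin_by_depth(depths, boundaries):
    """depths: light-space depth per primitive; boundaries: sorted
    depths of the s slice planes. Returns s+1 bins of primitive indices."""
    bins = [[] for _ in range(len(boundaries) + 1)]
    for i, d in enumerate(depths):
        bins[bisect.bisect_left(boundaries, d)].append(i)
    return bins
```

Once binned, a single pass over the bins renders every slice, which is what makes the overall opacity-map construction linear in the geometry.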

pdf | bibtex | GTC poster | video (SIGGRAPH course)
Fast Parallel GPU-Sorting Using a Hybrid Algorithm
Erik Sintorn, Ulf Assarsson
Workshop on General Purpose Processing on Graphics Processing Units
Journal Of Parallel and Distributed Computing

This paper presents an algorithm for fast sorting of large lists using modern GPUs. The method achieves high speed by efficiently utilizing the parallelism of the GPU throughout the whole algorithm. Initially, a parallel bucketsort splits the list into sublists that are then sorted in parallel using merge sort. The parallel bucketsort, implemented in NVIDIA's CUDA, utilizes the synchronization mechanisms, such as atomic increment, that are available on modern GPUs. The mergesort requires scattered writing, which is exposed by CUDA and ATI's Data Parallel Virtual Machine [1]. For lists with more than 512k elements, the algorithm performs better than the bitonic sort algorithms, which have been considered to be the fastest for GPU sorting, and is more than twice as fast for 8M elements. It is 6-14 times faster than single-CPU quicksort for 1-8M elements. In addition, the new GPU algorithm sorts in O(n log n) time as opposed to the standard O(n (log n)^2) for bitonic sort. Recently, it was shown how to implement GPU-based radix sort to outperform bitonic sort. That algorithm is, however, still up to ~40% slower for 8M elements than the hybrid algorithm presented in this paper. GPU sorting is memory bound, and a key to the high performance is that the mergesort works on groups of four float values to lower the number of memory fetches. Finally, we demonstrate the performance on sorting vertex distances for two large 3D models; a key step in, for instance, achieving correct transparency.
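The two-stage structure, a bucket pass that splits the input followed by independent sorts of each bucket, can be sketched on the CPU. This is an illustration of the idea only: the paper runs both stages in CUDA, uses atomic increments in the bucket pass, and sorts the buckets with a vectorized merge sort, for which Python's built-in sort stands in here:

```python
# CPU sketch of the hybrid bucketsort + mergesort idea: split the value
# range into buckets (done with atomic counters on the GPU), sort each
# bucket independently (done in parallel on the GPU), then concatenate.

def hybrid_sort(values, num_buckets=4):
    if not values:
        return []
    lo, hi = min(values), max(values)
    if lo == hi:
        return list(values)
    buckets = [[] for _ in range(num_buckets)]
    scale = num_buckets / (hi - lo)
    for v in values:
        i = min(int((v - lo) * scale), num_buckets - 1)
        buckets[i].append(v)          # GPU version: atomic increment per bucket
    out = []
    for b in buckets:                 # GPU version: one merge sort per bucket, in parallel
        b.sort()                      # stand-in for the merge-sort stage
        out.extend(b)
    return out
```

Because buckets partition the value range, concatenating the sorted buckets in order yields a fully sorted list with no final merge step.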

pdf | bibtex
News
2014-10-02 Added two scenes for people to use in their papers.

2013-02-26 Thesis defense (disputation) complete! Dr. Sintorn continues as a postdoc in the same group

2012-02-01 As of Feb 1st I am employed at Autodesk and will be for about six months before returning to Chalmers to complete my Ph.D.

2011-10-25 Only a year or two after the demise of my old page, I have created a new homepage, complete with a 'news' list that I am as likely as not never to update again.

Scenes
Below are a few of the scenes that I have made to test various algorithms in my papers. The objects in these scenes are all made by me, unless otherwise noted, and are all royalty free. If you're looking for a scene used in my papers that is not here, please email me and I'll see if I can make it available. You are free to use these files as you wish, but I would appreciate it if you emailed me if you do.

Closed Citadel
Copyright: Erik Sintorn 2014
A large open scene consisting of approximately 600k triangles. All objects in this scene are closed and (I think) two-manifold. It was created to test the "closed object" optimization in "Compact Precomputed Voxelized Shadows".

Copyright: Erik Sintorn 2014
A little scene made specifically to stress-test shadow-volume algorithms. The gate is a royalty free object from TurboSquid by "BlackantMaster".