# Tutorials Track

#### TUT1: Simulating heterogeneous crowds with interactive behaviors

Room Dresde // Monday, April 7th // 9:00 - 10:10 and 10:40 - 12:00

1University of Pennsylvania, USA; 2Disney Research Zurich, Switzerland; 3George Mason University, USA; 4University of Cyprus, Cyprus; 5Universitat Politecnica de Catalunya, Spain; 6University of Minnesota, USA.

Over the last decade there has been a large body of work on simulating crowds for different applications, such as movies, video games, training, and evacuations. This course focuses on heterogeneous crowd simulation for interactive applications and will describe state-of-the-art methods to simulate large groups of agents exhibiting a variety of behaviors, appearances, and animations. We will present different techniques, including psychological models and data-driven approaches, that attempt to imitate real humans. We also present different systems to speed up both navigation, through multi-domain planners, and rendering, using per-joint impostors on fully animated 3D characters. Finally, we provide quantitative and qualitative techniques to evaluate the quality of the simulated crowds, and include an overview of future research directions in the field.

#### TUT2: 3D video: from capture to diffusion

Room Boston // Monday, April 7th // 9:00 - 10:10 and 10:40 - 12:00

University of Reims Champagne-Ardenne, France.

While 3D vision has existed for many years, the film industry's use of 3D cameras and video-based modeling has sparked an explosion of interest in 3D acquisition technology, 3D content, and 3D displays. As such, 3D video has become one of the new technology trends of this century. This tutorial introduces the theoretical, technological, and practical concepts associated with multiview systems, covering acquisition, manipulation, and rendering. Stepping away from traditional 3D vision, the authors, all currently active in these areas, provide the elements necessary for understanding the computer-based science underlying these technologies.

#### TUT3: Reasoning about Shape in Complex Datasets: Geometry, Structure and Semantics

Room Leicester // Monday, April 7th // 9:00 - 10:10 and 10:40 - 12:00

1Institute of Applied Mathematics and Information Technologies (IMATI), National Research Council, Italy;

2Phenomics and Bioinformatics Research Centre, University of South Australia and Australian Centre of Plant Functional Genomics, Australia.

In recent years, the acquisition and modelling of 3D data has gained a significant boost from the availability of commodity devices. Digital 3D shape models are becoming a key component in many industrial, entertainment, and scientific sectors. Consequently, large collections of 3D data are nowadays available both in the public domain (e.g., on the Internet) and in private ones. Analyzing, classifying, and querying such 3D data collections are topics of increasing interest in the computer vision, pattern recognition, computer graphics, and digital geometry processing communities. The purpose of this tutorial is to introduce the fundamental mathematical tools for the analysis of collections of 3D models and to overview the state-of-the-art techniques. We will first introduce some of the main challenges in shape analysis, underlining the role of mathematics in identifying the geometry, structure, and semantics of a shape. We then overview the mathematical concepts rooted in differential geometry and topology, and show examples of surface correspondence, retrieval, and attribute transfer to demonstrate how these concepts have been exploited in recent research. We will also overview the fundamental tools for analyzing the variability in 3D shape collections, reviewing statistical shape analysis techniques and outlining the potential of statistical analysis on non-linear manifolds. Finally, we discuss the potential of structural shape analysis to achieve a smart, semantic representation of a digital object. We conclude the tutorial with an overview of some (classical and non-classical) applications where 3D shape analysis plays a central role.

#### TUT4: Bayesian and Quasi Monte Carlo Spherical Integration for Illumination Integrals

Room Stuttgart // Monday, April 7th // 9:00 - 10:10 and 10:40 - 12:00

Institut de Recherche en Informatique et Systèmes Aléatoires (IRISA).

The spherical sampling of the incident radiance function entails a high computational cost. Therefore, the illumination integral must be evaluated using a limited set of samples. Such a restriction raises the question of how to obtain the most accurate approximation possible from that limited set: we need to ensure that sampling produces the highest amount of information possible by carefully placing the samples. Furthermore, we want our integral evaluation to take into account not only the information produced by the sampling but also any information available prior to sampling. In this tutorial we focus on the case of hemispherical sampling for spherical Monte Carlo (MC) integration. We will show that existing techniques can be improved through a detailed analysis of the theory of MC spherical integration. We will then use this theory to identify and improve the weak points of current approaches, drawing on very recent advances in Bayesian and spherical Quasi-Monte Carlo integration.
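As a minimal illustration of the setting (this toy example is not part of the course material), the following Python sketch estimates a hemispherical illumination-style integral, the irradiance ∫ L(ω) cos θ dω, with uniform hemisphere sampling; the radiance function `L` is a made-up stand-in:

```python
import math
import random

def sample_hemisphere():
    """Uniformly sample a direction on the unit upper hemisphere.
    For uniform solid-angle sampling, cos(theta) is uniform in [0, 1)."""
    u1, u2 = random.random(), random.random()
    z = u1                                   # cos(theta)
    r = math.sqrt(max(0.0, 1.0 - z * z))
    phi = 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), z)

def estimate_irradiance(L, n_samples=10000):
    """Monte Carlo estimate of E = integral of L(w) * cos(theta) over the
    hemisphere.  Uniform hemisphere sampling has pdf 1 / (2*pi), so each
    sample contributes L(w) * cos(theta) / pdf."""
    total = 0.0
    for _ in range(n_samples):
        w = sample_hemisphere()
        total += L(w) * w[2]                 # cos(theta) is the z component
    return total * 2.0 * math.pi / n_samples

# Constant radiance L = 1: the exact answer is the integral of
# cos(theta) over the hemisphere, which equals pi.
random.seed(0)
print(estimate_irradiance(lambda w: 1.0))    # close to pi
```

Carefully placing the limited sample set (rather than drawing it uniformly, as above) is exactly what the Quasi-Monte Carlo and Bayesian approaches covered by the tutorial aim to improve.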

#### TUT5: Path Integral Methods for Light Transport Simulation: Theory & Practice

Room Dresde // Monday, April 7th // 13:30 - 16:00

1Charles University in Prague;
2Saarland University;

3Karlsruhe Institute of Technology;
4Next Limit Technologies.

We are witnessing a renewed research interest in robust and efficient light transport simulation based on statistical methods. This research effort is propelled by the desire to accurately render general environments with complex materials and light sources, which is often difficult with the currently employed solutions. In addition, it has been recognized that advanced methods, which are able to render many effects in one pass without excessive tweaking, increase artists’ productivity and allow them to focus on their creative work. For this reason, the movie industry is shifting away from approximate rendering solutions toward physically-based rendering methods, which poses new challenges in terms of strict requirements on image quality and algorithm robustness.

Many of the recent advances in light transport simulation, such as the robust combination of bidirectional path tracing with photon mapping (Vertex Connection and Merging / Unified Path Space), or the new Markov chain Monte Carlo methods are made possible by interpreting light transport as an integral in the space of light paths. However, there is a great deal of confusion among practitioners and researchers alike regarding these path space methods.

The goal of this tutorial is twofold. First, we present a coherent review of the path integral formulation of light transport and its applications, including the most recent ones. We show that rendering algorithms that may seem complex at first sight are in fact naturally derived from this general framework. A significant part of the tutorial is devoted to the application of Markov chain Monte Carlo methods to light transport simulation, such as Metropolis Light Transport and its variants, including an extensive empirical comparison of these MCMC methods. The second part of the tutorial discusses the practical aspects of applying advanced light transport simulation methods to architectural visualization and VFX tasks.
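For reference, the path integral formulation referred to above (due to Veach) writes the measurement of pixel $j$ as an integral over the space of complete light paths:

```latex
I_j = \int_{\Omega} f_j(\bar{x}) \, \mathrm{d}\mu(\bar{x}),
\qquad \bar{x} = \mathbf{x}_0 \mathbf{x}_1 \cdots \mathbf{x}_k,
```

where $\Omega$ is the space of paths of all lengths, $\mathrm{d}\mu(\bar{x}) = \mathrm{d}A(\mathbf{x}_0)\cdots\mathrm{d}A(\mathbf{x}_k)$ is the area-product measure, and the measurement contribution function

```latex
f_j(\bar{x}) = L_e(\mathbf{x}_0 \to \mathbf{x}_1)\,
G(\mathbf{x}_0 \leftrightarrow \mathbf{x}_1)
\left[ \prod_{i=1}^{k-1}
f_s(\mathbf{x}_{i-1} \to \mathbf{x}_i \to \mathbf{x}_{i+1})\,
G(\mathbf{x}_i \leftrightarrow \mathbf{x}_{i+1}) \right]
W_e^{(j)}(\mathbf{x}_{k-1} \to \mathbf{x}_k)
```

accumulates emitted radiance, BSDF factors, geometry terms, and sensor sensitivity along the path. Algorithms such as path tracing, bidirectional path tracing, and Metropolis Light Transport differ only in how they sample paths $\bar{x}$ and weight their contributions.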

#### TUT6: Turbulent Fluids

Room Boston // Monday, April 7th // 13:30 - 16:00

1UC Berkeley; 2TU Munich.

Over the last decade, the special effects industry has embraced physics simulations as a highly useful addition to its tool-set for creating realistic scenes ranging from a small camp fire to the large scale destruction of whole cities. The simulation methods used to create these effects are largely based on techniques originally developed to replace scientific experiments with computer simulations. In a direct application of this paradigm to movie making, we can now replace a real effects set, such as staging an exploding house, with the simulated explosion of a virtual model of the house. This has some obvious advantages: it is more cost-effective, enables a wider variety of effects and of course it is far less dangerous for the people involved. But arguably, the desire to fine-tune and control effects in general is the primary reason why movie makers prefer the use of virtual tools over their traditional counterparts. Unfortunately, controlling the details of a violent phenomenon such as an explosion remains problematic even using numerical simulations. Due to the chaotic nature of turbulent fluids, such simulations tend to be both computationally expensive and unpredictable. Small changes in initial conditions or a change of resolution will produce unexpected changes in the final motion, and make it hard for animators to obtain the desired behavior for the effect. For this reason, the following tutorial notes will focus on tools for augmenting existing coarse simulations with turbulent detail. This enables rich detail and visually interesting small-scale motion, but also allows for a practical multi-stage workflow that gives artists control over large-scale motion and small-scale details separately.

Overall, this tutorial aims at providing an overview and practical guidelines to employing turbulence modeling techniques for fluid simulations. Turbulence has been a topic of research in classical fluid dynamics for a long time, and is discussed in a vast body of publications.

This tutorial will give a condensed overview of the central concepts, and introduce modeling techniques that are relevant for applications in Computer Graphics. More specifically, control and art direction of simulations are enabled with a two-stage workflow: first, a rough initial simulation is conducted. In a second stage, turbulent effects are computed and applied to the simulation to increase its detail level. Motivated by these concepts, several approaches for increasing the visual detail of fluid simulations will be introduced. In addition to discussing single-phase simulations, e.g., smoke and fire, we will also discuss the difficulties surrounding multi-phase liquid turbulence, and present a practical new algorithm for its simulation. As a central aim of this tutorial is to provide information on how to use turbulence theory for practical applications, source code examples for the methods covered will be made available. Additionally, the implementations will be discussed to provide starting points for navigating the source code.
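The two-stage idea of adding turbulent detail on top of a coarse base flow can be sketched in a few lines of Python (an illustrative caricature, not one of the tutorial's algorithms; all names and parameters here are invented for the example). The added detail is "curl noise": the perpendicular gradient of a band-limited random potential, which is divergence-free by construction.

```python
import numpy as np

def curl_noise_2d(shape, seed=0, cutoff=8):
    """Divergence-free 2D velocity detail from the perpendicular gradient
    of a band-limited random potential psi: (u, v) = (dpsi/dy, -dpsi/dx)."""
    rng = np.random.default_rng(seed)
    psi = rng.standard_normal(shape)
    # Crude band-limiting: keep only the low/mid frequencies of psi.
    F = np.fft.fft2(psi)
    ky, kx = np.meshgrid(np.fft.fftfreq(shape[0]) * shape[0],
                         np.fft.fftfreq(shape[1]) * shape[1],
                         indexing="ij")
    F[np.hypot(kx, ky) > cutoff] = 0.0
    psi = np.real(np.fft.ifft2(F))
    dpsi_dy, dpsi_dx = np.gradient(psi)      # gradients along axis 0, axis 1
    return dpsi_dy, -dpsi_dx

n = 64
# Stage 1: coarse base flow (here simply a constant drift to the right).
u_base, v_base = np.ones((n, n)), np.zeros((n, n))
# Stage 2: add small-scale turbulent detail on top of the coarse motion.
u_t, v_t = curl_noise_2d((n, n))
u = u_base + 0.3 * u_t
v = v_base + 0.3 * v_t
# The added detail does not change the divergence of the base flow:
div = np.gradient(u_t, axis=1) + np.gradient(v_t, axis=0)
print(float(np.abs(div).max()))              # numerically zero
```

Because the detail layer is computed separately, an artist can rescale or re-seed it without re-running the (expensive) coarse simulation, which is the point of the workflow described above.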

The goal is to give developers interested in implementing powerful fluid solvers the knowledge to apply turbulence models, and to give artists who are curious about the technology a better understanding of when and how to make use of the different methods. This naturally also includes knowledge of the limitations of the various approaches; this tutorial therefore also provides guidelines and a discussion of the important pros and cons for each of the introduced methods. While the tutorial notes are structured based on papers in the field, most of the presented methods are modular. We encourage mixing and matching predictor and synthesis components from the various methods to find the best solution to a given problem.

#### TUT7: Dynamic 2D/3D Registration

Room Leicester // Monday, April 7th // 13:30 - 16:00

Ecole Polytechnique Federale de Lausanne.

Image and geometry registration algorithms are an essential component of many computer graphics and computer vision systems. With recent technological advances in RGB-D sensors, such as the Microsoft Kinect or Asus Xtion Live, robust algorithms that combine 2D image and 3D geometry registration have become an active area of research. The goal of this course is to introduce the basics of 2D/3D registration algorithms and to provide theoretical explanations and practical tools to design computer vision and computer graphics systems based on RGB-D devices. To illustrate the theory and demonstrate practical relevance, we briefly discuss three applications: rigid scanning, non-rigid modeling, and real-time face tracking. Our course targets researchers and practitioners with a background in computer graphics and/or computer vision. An up-to-date version of the course notes as well as slides and source code can be found at http://lgg.epfl.ch/2d3dRegistration.
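As background for the rigid-scanning application mentioned above, a classic building block (not necessarily the course's own method) is the least-squares rigid alignment of two corresponded 3D point sets, solved in closed form via the Kabsch/Procrustes SVD. A minimal sketch:

```python
import numpy as np

def rigid_align(P, Q):
    """Best-fit rotation R and translation t mapping point set P onto Q
    in the least-squares sense (Kabsch / Procrustes via SVD).
    P, Q: (n, 3) arrays with known correspondences P[i] <-> Q[i]."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Sanity check: recover a known rotation and translation exactly.
rng = np.random.default_rng(1)
P = rng.standard_normal((100, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ R_true.T + t_true
R, t = rigid_align(P, Q)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```

In a real RGB-D pipeline such as ICP-style scan registration, the correspondences are themselves unknown and must be re-estimated between alignment steps, which is one of the topics such a course covers.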

#### TUT8: Efficient Sorting and Searching in Rendering Algorithms

Room Stuttgart // Monday, April 7th // 13:30 - 16:00

Czech Technical University in Prague.

In this tutorial we highlight the connection between rendering algorithms and sorting and searching as classical problems studied in computer science. We will provide both theoretical and empirical evidence that, for many rendering techniques, most of the processing time is spent on sorting and searching. In particular we will discuss problems and solutions for visibility computation, density estimation, and importance sampling. For each problem we will mention its specific issues, such as the dimensionality of the search domain or online versus offline searching. We will present the underlying data structures and their enhancements in the context of specific rendering algorithms such as ray shooting, photon mapping, and z-buffer based rendering. We will specifically discuss the differences when implementing data structures on CPUs and GPUs.
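One concrete instance of the searching problems above is the k-nearest-neighbour photon lookup at the heart of photon-mapping density estimation. The Python sketch below uses SciPy's kd-tree; the photon data is synthetic and the estimator is deliberately simplified (total flux over the area of the enclosing disc), so treat it as an illustration of the data structure rather than a full radiance estimate:

```python
import numpy as np
from scipy.spatial import cKDTree

# Synthetic "photon map": positions and per-photon power (flux).
rng = np.random.default_rng(0)
n_photons = 100000
positions = rng.random((n_photons, 3))     # photons scattered in a unit cube
flux = np.full(n_photons, 1e-4)            # equal power per photon

tree = cKDTree(positions)                  # O(n log n) build, fast kNN queries

def density_estimate(x, k=50):
    """Simplified kNN photon density estimate at point x: sum the flux of
    the k closest photons and divide by the area of the disc whose radius
    is the distance to the k-th neighbour."""
    dists, idx = tree.query(x, k=k)
    r = dists[-1]                          # radius enclosing the k photons
    return flux[idx].sum() / (np.pi * r * r)

print(density_estimate(np.array([0.5, 0.5, 0.5])))
```

Replacing the kd-tree with a linear scan makes every query O(n) instead of roughly O(log n), which is exactly the kind of sorting-and-searching cost the tutorial sets out to quantify.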

#### TUT9: An Introduction to Optimization Techniques in Computer Graphics

Part I // Room Orangerie // Monday, April 7th // 9:00 - 10:10 and 10:40 - 12:00

Part II // Room Orangerie // Monday, April 7th // 13:30 - 16:00

1Inria Bordeaux Sud-Ouest, France;
2Institut d'Optique, Talence, France;

3ICTEAM/ELEN, UCL, Belgium;
4HCI Heidelberg, Germany.

**Background:** Many students in Computer Science do not have a sufficient background in applied mathematics to employ state-of-the-art optimization techniques and to judge the outcome of such techniques critically (e.g. regarding the stability/quality/accuracy of their output). At the same time, the use of optimization techniques in computer graphics is becoming ubiquitous. Treating optimization algorithms as a black box yields sub-optimal results at best. At worst, stability issues and convergence problems may prevent the solution of a problem or impede the general application of a method to a wide range of input, i.e. beyond the set of examples shown in a paper. The course will draw attention to these aspects and to current best practices. This will enable participants to judge articles that use optimization schemes critically and improve their own skill set.

**Scope and Intended Audience:** For this purpose, we propose an introductory course on optimization techniques in computer graphics. We aim at thoroughly covering the basic techniques in optimization, only requiring a good working knowledge of the mathematical foundations in a standard CS curriculum, in particular, multi-dimensional analysis and linear algebra. Part of the course will be suitable for a starting PhD student. On the other end, our goal is to lead up to current research including modern ideas such as compressed sensing, convex variational formulations, and sparsity-inducing norms. We aim at exposing the major underlying ideas and working principles, and at giving hints for a successful implementation. The course thus also caters to experienced researchers who seek to utilize these modern techniques. We approach these goals by discussing a mixture of classic and more modern optimization approaches. Each section is presented by an expert in the area and comprises two major parts: 1.) a condensed introduction of the necessary background and 2.) its application to particular graphics problems. We aim to give implementation hints and to expose current best practices.
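To give a small taste of the modern ideas mentioned above (this example is illustrative, not course material; the problem sizes and parameters are arbitrary), the Python sketch below solves a compressed-sensing style recovery problem with a sparsity-inducing l1 norm, using the iterative shrinkage-thresholding algorithm (ISTA):

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of the l1 norm: shrinks entries toward zero."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, b, lam, n_iter=1000):
    """Iterative shrinkage-thresholding for the lasso problem
    min_x 0.5 * ||A x - b||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)           # gradient of the smooth term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Recover a 3-sparse vector in R^100 from only 40 noiseless measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [2.0, -1.5, 3.0]
b = A @ x_true
x_hat = ista(A, b, lam=1.0)
print(np.flatnonzero(np.abs(x_hat) > 0.5))  # indices of the recovered spikes
```

Understanding why this converges, how to choose the step size and the regularization weight, and when such a formulation is stable is precisely the kind of critical judgment the course aims to build.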

**Dissemination of Materials:** We will set up a course web page where associated material will be available after the conference.