State of the Art Reports
ST1: Physically-based Simulation of Cuts in Deformable Bodies: A Survey
Room Orangerie // Tuesday, April 8th // 8:30 - 10:10
Computer Graphics & Visualization Group, Technische Universität München, Germany
Virtual cutting of deformable bodies has been an important and active research topic in physically-based simulation for more than a decade. A particular challenge in virtual cutting is the robust and efficient incorporation of cuts into an accurate computational model that is used for the simulation of the deformable body. This report presents a coherent summary of the state-of-the-art in virtual cutting of deformable bodies, focusing on the distinct geometrical and topological representations of the deformable body, as well as the specific numerical discretizations of the governing equations of motion. In particular, we discuss virtual cutting based on tetrahedral, hexahedral, and polyhedral meshes, in combination with standard, polyhedral, composite, and extended finite element discretizations. A separate section is devoted to meshfree methods. The report is complemented with an application study to assess the performance of virtual cutting simulators.
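For orientation, the finite element discretizations discussed here typically start from the semi-discrete equations of motion of the deformable body; the following standard form (an illustrative sketch, with notation assumed rather than taken from the report) makes the computational model concrete:

```latex
% Semi-discrete FEM equations of motion of a deformable body:
%   M - mass matrix, D - damping matrix, K(u) - (possibly nonlinear) stiffness,
%   u - nodal displacements, f_ext - external forces.
\[
  \mathbf{M}\,\ddot{\mathbf{u}}(t) + \mathbf{D}\,\dot{\mathbf{u}}(t) + \mathbf{K}(\mathbf{u})\,\mathbf{u}(t) = \mathbf{f}_{\mathrm{ext}}(t)
\]
```

Cutting changes the mesh topology and hence the structure and entries of these matrices, which is precisely where the robustness and efficiency challenges summarized in the report arise.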
ST2: SPH Fluids in Computer Graphics
Room Orangerie // Tuesday, April 8th // 10:40 - 12:20
1 University of Freiburg, 2 University of Siegen, 3 ETH Zurich
Smoothed Particle Hydrodynamics (SPH) has been established as one of the major concepts for fluid animation in computer graphics. While SPH initially gained popularity for interactive free-surface scenarios, it has since evolved into a fully-fledged technique for state-of-the-art fluid animation with versatile effects. Nowadays, complex scenes with millions of sampling points, one- and two-way coupled rigid and elastic solids, multiple phases, and additional features such as foam or air bubbles can be computed at reasonable expense. This state-of-the-art report summarizes SPH research within the graphics community.
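For readers new to the topic, the core SPH approximation can be sketched as follows (a standard textbook form, added here for reference rather than quoted from the report): a field quantity at a particle is estimated as a kernel-weighted sum over its neighbours.

```latex
% SPH interpolation of a field A at particle i, and the density estimate:
%   m_j, rho_j - mass and density of neighbour j, W - smoothing kernel, h - support radius.
\[
  A_i \approx \sum_j \frac{m_j}{\rho_j}\, A_j\, W(\mathbf{x}_i - \mathbf{x}_j, h),
  \qquad
  \rho_i = \sum_j m_j\, W(\mathbf{x}_i - \mathbf{x}_j, h)
\]
```

Pressure, viscosity and surface-tension forces are obtained by applying differential operators to such sums; the SPH variants covered in the report differ largely in how these forces and the incompressibility constraint are handled.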
ST3: A Survey of Color Mapping and its Applications
Room Orangerie // Wednesday, April 9th // 8:30 - 10:10
1 Technicolor R&D, France, 2 University of Saint Etienne, France
Color mapping methods aim to recolor a given image or video by deriving a mapping between that image and another image serving as a reference. This class of methods has received considerable attention in recent years, both in academic literature and in industrial applications. Methods for recoloring images have often appeared under the labels of color correction, color transfer or color balancing, to name a few, but their goal is always the same: mapping the colors of one image to another. In this report, we present a comprehensive overview of these methods and offer a classification of current solutions depending not only on their algorithmic formulation but also their range of applications. We discuss the relative merit of each class of techniques through examples and show how color mapping solutions can and have been applied to a diverse range of problems.
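To make the notion of mapping the colors of one image to another concrete, here is a minimal sketch of a global, statistics-based transfer that matches per-channel mean and standard deviation; working directly on RGB (rather than in a decorrelated color space, as classic color-transfer methods do) is a simplifying assumption for illustration only.

```python
import numpy as np

def color_transfer(source, reference):
    """Recolor `source` so its per-channel mean/std match `reference`.

    source, reference: float arrays of shape (H, W, 3) with values in [0, 1].
    A deliberately simplified illustration; practical color-mapping methods
    use decorrelated color spaces, local statistics, and outlier handling.
    """
    src = source.reshape(-1, 3)
    ref = reference.reshape(-1, 3)
    src_mean, src_std = src.mean(axis=0), src.std(axis=0) + 1e-8
    ref_mean, ref_std = ref.mean(axis=0), ref.std(axis=0)
    mapped = (src - src_mean) / src_std * ref_std + ref_mean
    return np.clip(mapped, 0.0, 1.0).reshape(source.shape)
```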
ST4: “Look me in the eyes”
Room Orangerie // Wednesday, April 9th // 10:40 - 12:20
1 Trinity College Dublin, Ireland, 2 University of Wisconsin, Madison, United States, 3 KTH Royal Institute of Technology, Stockholm, Sweden, 4 University of Louvain, Belgium, 5 University of Pennsylvania, Philadelphia, United States
A person's emotions and state of mind are apparent in their face and eyes. As a Latin proverb states: "The face is the portrait of the mind; the eyes, its informers." This presents a huge challenge for computer graphics researchers aiming to generate artificial entities that replicate the movement and appearance of the human eye, which is so important in human-human interactions.
This State of the Art Report provides an overview of the efforts made on tackling this challenging task. As with many topics in Computer Graphics, a cross-disciplinary approach is required to fully understand the workings of the eye in the transmission of information to the user. We discuss the movement of the eyeballs, eyelids, and the head from a physiological perspective and how these movements can be modelled, rendered and animated in computer graphics applications. Further, we present recent research from psychology and sociology that seeks to understand higher level behaviours, such as attention and eye-gaze, during the expression of emotion or during conversation, and how they are synthesised in Computer Graphics and Robotics.
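As one small example of the physiological regularities that such models build on (added here for illustration; the constants are commonly cited rule-of-thumb values, not figures from this report), saccade duration is often approximated as a linear function of saccade amplitude, the so-called main sequence:

```python
def saccade_duration_ms(amplitude_deg, slope_ms_per_deg=2.2, intercept_ms=21.0):
    """Rule-of-thumb 'main sequence' estimate of saccade duration.

    duration ~ slope * amplitude + intercept; the default constants are
    commonly cited approximations and vary across individuals and studies,
    so treat this purely as an illustrative model for gaze animation timing.
    """
    return slope_ms_per_deg * amplitude_deg + intercept_ms

# Example: a 10-degree gaze shift lasts on the order of 40-45 ms.
print(saccade_duration_ms(10.0))
```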
ST5: State-of-the-Art Report on Real-time Rendering using Hardware Tessellation
Room Orangerie // Wednesday, April 9th // 14:00 - 15:40
1 Stanford University, 2 University of Erlangen-Nuremberg, 3 Microsoft Research
For a long time, GPUs have primarily been optimized to render more and more triangles with increasingly flexible shading. However, scene data itself has typically been generated on the CPU and then uploaded to GPU memory. Therefore, widely used techniques that generate geometry on demand at render time for the rendering of smooth and displaced surfaces were not applicable to interactive applications. Recent advances in graphics hardware, in particular the introduction of the GPU tessellation unit, overcome this limitation: complex geometry can now be generated within the GPU's rendering pipeline on the fly. GPU hardware tessellation enables the generation of smooth parametric surfaces or the application of displacement mapping in real-time applications. However, many well-established approaches in offline rendering are not directly transferable, due to the limited tessellation patterns or the parallel execution model of the tessellation stage. In this state of the art report, we provide an overview of recent work and challenges in this area by summarizing, discussing and comparing methods for the rendering of smooth and highly detailed surfaces in real time.
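As a minimal illustration of what the tessellation evaluation stage computes per generated vertex (a sketch under assumed notation, not a description of any particular GPU pipeline), the following evaluates a bicubic Bézier patch at a parametric coordinate, the kind of smooth parametric surface the report discusses:

```python
import numpy as np

def bernstein3(t):
    """Cubic Bernstein basis values at parameter t in [0, 1]."""
    s = 1.0 - t
    return np.array([s**3, 3.0*s*s*t, 3.0*s*t*t, t**3])

def eval_bicubic_bezier(control_points, u, v):
    """Evaluate a bicubic Bezier patch at (u, v).

    control_points: array of shape (4, 4, 3) holding the 16 control points.
    (u, v): the parametric coordinates a hardware tessellator would feed to
    the evaluation (domain) shader for each generated vertex.
    """
    bu, bv = bernstein3(u), bernstein3(v)
    # Tensor-product evaluation: sum over i, j of B_i(u) * B_j(v) * P_ij
    return np.einsum('i,j,ijk->k', bu, bv, control_points)
```

A displacement map would then be sampled at (u, v) and the result offset along the surface normal, which is the second use case named in the abstract.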
ST6: Data-driven video completion
Room Orangerie // Wednesday, April 9th // 16:30 - 18:10
1 Tel-Aviv University, 2 The Interdisciplinary Center
Image completion techniques aim to complete selected regions of an image in a natural-looking manner with little or no user interaction. Video completion, the space-time equivalent of the image completion problem, inherits and extends both the difficulties and the solutions of the original 2D problem, but also imposes new ones, mainly temporal coherency and space complexity (videos contain significantly more information than images). Data-driven approaches to completion have been established as a favored choice, especially when large regions have to be filled. In this report we present the current state of the art in data-driven video completion techniques. For unacquainted researchers, we aim to provide a broad yet easy-to-follow introduction to the subject and early guidance on the challenges ahead. For the versed reader, we offer a comprehensive review of contemporary techniques, organized by their approaches to key aspects of the problem.
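The basic primitive shared by most data-driven completion methods is comparing space-time patches against the known parts of the video; the following brute-force sketch (function names and parameters are illustrative, not taken from any specific paper) finds the candidate patch most similar to a query on its known pixels:

```python
import numpy as np

def patch_ssd(query, candidate, known_mask):
    """Sum of squared differences over the known (mask == True) pixels only."""
    diff = (query - candidate)[known_mask]
    return float(np.sum(diff * diff))

def best_match(query, known_mask, video, patch_size=(5, 7, 7), stride=2):
    """Exhaustive search over a grayscale video volume of shape (T, H, W).

    Practical systems replace this brute-force loop with approximate
    nearest-neighbour search (e.g. PatchMatch-style propagation) and add
    temporal-coherency terms, but the matching cost is the same idea.
    """
    pt, ph, pw = patch_size
    T, H, W = video.shape
    best, best_cost = None, np.inf
    for t in range(0, T - pt + 1, stride):
        for y in range(0, H - ph + 1, stride):
            for x in range(0, W - pw + 1, stride):
                cand = video[t:t+pt, y:y+ph, x:x+pw]
                cost = patch_ssd(query, cand, known_mask)
                if cost < best_cost:
                    best, best_cost = cand, cost
    return best, best_cost
```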
ST7: Quantifying 3D shape similarity using maps: Recent trends, applications and perspectives
Room Orangerie // Thursday, April 10th // 8:30 - 10:10
1 Istituto di Matematica Applicata e Tecnologie Informatiche, Genova - CNR, Italy, 2 School of Electrical Engineering, Tel Aviv University, Israel, 3 Institute of Computational Science, University of Lugano (USI), Switzerland
Shape similarity is a central issue in Computer Vision and Computer Graphics that involves many aspects of human perception of the real world, including judged and perceived similarity concepts, deterministic and probabilistic decisions and their formalization. 3D models carry multiple kinds of information (e.g., geometry, topology, texture, time evolution, appearance), which can be thought of as the filter that drives the recognition process. Assessing and quantifying the similarity between 3D shapes is necessary to explore large datasets of shapes and to tune the analysis framework to the user's needs. Many efforts have been made in this direction, including several attempts to formalize suitable notions of similarity and distance among 3D objects and their shapes.
In recent years, 3D shape analysis has attracted rapidly growing interest in a number of challenging issues, ranging from deformable shape similarity to partial matching and view-point selection. Within this panorama, we focus on methods that quantify shape similarity (between two objects or between sets of models) and compare shapes in terms of the properties (global and local, geometric, differential and topological) conveyed by (sets of) maps. After presenting in detail the theoretical foundations underlying these methods, we review their usage in a number of 3D shape application domains, ranging from matching and retrieval to annotation and segmentation. Particular emphasis is given to analysing the suitability of the different methods for specific classes of shapes (e.g., rigid or isometric shapes), as well as the flexibility of the various methods at the different stages of the shape comparison process. Finally, the most promising directions for future research developments are discussed.
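One representative way in which a map conveys similarity (sketched here for orientation, with notation assumed rather than quoted from the report) is its metric distortion: a correspondence between two shapes, viewed as metric spaces, is good if it preserves pairwise intrinsic distances, and minimizing the distortion over all correspondences yields a Gromov-Hausdorff-style dissimilarity:

```latex
% Distortion of a map f : X -> Y between shapes with intrinsic metrics d_X, d_Y,
% and the dissimilarity obtained by minimizing over correspondences C between X and Y.
\[
  \operatorname{dis}(f) = \sup_{x, x' \in X} \bigl|\, d_X(x, x') - d_Y\bigl(f(x), f(x')\bigr) \bigr|,
  \qquad
  d(X, Y) = \tfrac{1}{2}\, \inf_{C} \sup_{(x, y),\,(x', y') \in C} \bigl|\, d_X(x, x') - d_Y(y, y') \bigr|
\]
```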
ST8: State of the Art in Surface Reconstruction from Point Clouds
Room Orangerie // Thursday, April 10th // 10:40 - 12:20
1 Air Force Research Laboratory, Information Directorate, 2 École Polytechnique Fédérale de Lausanne, 3 Inria Sophia-Antipolis - Mediterranee, 4 Clemson University, 5 Ben-Gurion University, 6 Polytechnic Institute of New York University
The area of surface reconstruction has seen substantial progress in the past two decades. The traditional problem addressed by surface reconstruction is to recover the digital representation of a physical shape that has been scanned, where the scanned data contains a wide variety of defects. While much of the earlier work has been focused on reconstructing a piece-wise smooth representation of the original shape, recent work has taken on more specialized priors to address significantly challenging data imperfections, where the reconstruction can take on different representations - not necessarily the explicit geometry. This state-of-the-art report surveys the field of surface reconstruction, providing a categorization with respect to priors, data imperfections, and reconstruction output. By considering a holistic view of surface reconstruction, this report provides a detailed characterization of the field, highlights similarities between diverse reconstruction techniques, and provides directions for future work in surface reconstruction.
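As a small illustration of the implicit-function family of reconstruction methods surveyed here (a sketch in the spirit of tangent-plane signed distances, not the report's own algorithm), a piecewise-smooth surface can be recovered by evaluating, at any query point, the signed distance to the tangent plane of the nearest oriented sample and extracting the zero level set of that function:

```python
import numpy as np

def signed_distance(query, points, normals):
    """Signed distance from `query` to the tangent plane of its nearest sample.

    points:  (N, 3) scanned positions;  normals: (N, 3) unit normals.
    Sampling this function on a grid and polygonizing its zero level set
    (e.g. with marching cubes) yields an approximate reconstruction; real
    methods add priors to cope with noise, outliers and missing data.
    """
    dists = np.linalg.norm(points - query, axis=1)
    i = int(np.argmin(dists))
    return float(np.dot(query - points[i], normals[i]))
```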
ST9: Artistic Editing of Appearance, Lighting, and Material
Room Orangerie // Thursday, April 10th // 14:00 - 15:40
1 Karlsruhe Institute of Technology, 2 Sapienza University of Rome, 3 Université de Montréal, 4 Disney Research, Zürich
Mimicking the appearance of the real world is a longstanding goal of computer graphics, with several important applications in the feature-film, architecture and medical industries. Images with well-designed shading are an important tool for conveying information about the world, be it the shape and function of a CAD model, or the mood of a movie sequence. However, authoring this content is often a tedious task, even if undertaken by groups of highly-trained and experienced artists. Unsurprisingly, numerous methods to facilitate and accelerate this appearance editing task have been proposed, enabling the editing of scene objects' appearance, lighting, and materials, and introducing new interaction paradigms and specialized preview rendering techniques. In this STAR we provide a comprehensive survey of artistic appearance, lighting, and material editing approaches. We organize this complex and active research area in a structure tailored to academic researchers, graduate students, and industry professionals alike. In addition to editing approaches, we discuss how user interaction paradigms and rendering backends combine to form usable systems for appearance editing. We conclude with a discussion of open problems and challenges to motivate and guide future research.
ST10: Practice and Theory of Blendshape Facial Models
Room Orangerie // Friday, April 11th // 9:00 - 10:40
1 Victoria University, 2 OLM Digital, 3 Google Inc, 4 University of Houston
“Blendshape”, a simple linear model of facial expression, is the prevalent approach to realistic facial animation. It has driven animated characters in Hollywood films, and is a standard feature of commercial animation packages. The blendshape approach originated in industry, and became a subject of academic research relatively recently. This report describes the state of the art in this area, covering both literature from the graphics research community, and developments published in industry forums. We show that, despite the simplicity of the blendshape approach, there remain open problems associated with this fundamental technique.
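In its common delta formulation (restated here for reference; the notation is assumed, not taken from the report), the blendshape model expresses a face as the neutral shape plus a weighted sum of expression offsets:

```latex
% f   - resulting face (stacked vertex coordinates)
% b_0 - neutral face, b_k - k-th blendshape target, w_k - animator-controlled weights
% B   - matrix whose columns are the offsets (b_k - b_0)
\[
  \mathbf{f} = \mathbf{b}_0 + \sum_{k=1}^{n} w_k \,(\mathbf{b}_k - \mathbf{b}_0)
             = \mathbf{b}_0 + \mathbf{B}\,\mathbf{w}
\]
```

The simplicity referred to in the abstract is precisely this linearity in the weights w.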