Industrial Presentations

IN1: Media Production

Room ARP // Wednesday, April 9th // 10:40 - 12:20

Overview of our IG software and the underlying 3D engine

Gregory Jaegy - Imagine3D

Integrating 3D for web based cross media production

Jean-Marie Delorme - Dalim software

Pre-viz Technologies for Standard Production Workflows

Jean-Eudes Marvie - Technicolor

IN2: Visualization & Photography

Auditorium Schweitzer // Wednesday, April 9th // 14:00 - 15:40

Visualization and Processing of Highly Detailed Teravoxel Volume Data Sets

Klaus Engel - SIEMENS

WYSIWYG Computational Photography via Viewfinder Editing

Kari Pulli - NVIDIA

IN3: Computational Graphics & Motion

Auditorium Schweitzer // Wednesday, April 9th // 16:30 - 18:10

Computational Graphics: An Overview of Graphics Research at NVIDIA

Cyril Crassin - NVIDIA

What You See is What You Capture - Real-Time Data Capture in Cinebox

Xiaomao Wu - Crytek

Disney’s Hair Pipeline: Crafting Hair Styles From Design to Motion

Maryann Simmons and Brian Whited - Walt Disney Animation Studios


Detailed program and abstracts

What You See is What You Capture - Real-Time Data Capture in Cinebox


Xiaomao Wu - Crytek

In this talk, we will introduce the LiveMocap pipeline we have developed inside Cinebox® over the last few years. The LiveMocap component enables game directors, designers and artists to capture data, preview it in the final scenes in real time, and review the captured data on the fly. With this new pipeline, the quality of the captured data is improved and production time is shortened. Case studies will be demonstrated with Crytek's AAA titles, including Ryse: Son of Rome® (an Xbox One launch title) and Warface®.

The LiveMocap SDK we developed will also be introduced. The SDK enables users to write their own LiveMocap plugins for new mocap systems, which can then be plugged into Cinebox easily.
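The SDK's actual API is not public; as a rough sketch of what a device plugin for such a system could look like, the interface below polls a mocap source for skeleton frames that the host engine then retargets onto an in-scene character (all names are hypothetical, not Cinebox's):

```cpp
// Hypothetical sketch of a LiveMocap-style device plugin interface.
// All names are illustrative; the actual Cinebox SDK API is not public.
#include <string>
#include <vector>

struct JointSample {
    int   jointId;                 // index into the target skeleton
    float position[3];             // world-space position
    float rotation[4];             // orientation as a quaternion (x, y, z, w)
};

struct MocapFrame {
    double timestamp;              // capture time in seconds
    std::vector<JointSample> joints;
};

// A device plugin implements this interface; the host engine polls it
// once per frame and retargets the samples onto the in-scene character.
class IMocapDevicePlugin {
public:
    virtual ~IMocapDevicePlugin() = default;
    virtual bool connect(const std::string& endpoint) = 0;  // e.g. host:port
    virtual void disconnect() = 0;
    // Non-blocking: returns false if no new frame is available yet.
    virtual bool pollFrame(MocapFrame& out) = 0;
};
```

A non-blocking poll keeps the engine's frame loop independent of the capture device's rate, which is what makes on-the-fly preview in the final scene possible.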

Xiaomao Wu

Dr. Xiaomao Wu is currently Lead Software Engineer of Cinebox at Crytek's Frankfurt headquarters, and Associate Editor of ACM Computers in Entertainment. After finishing his Ph.D. at Shanghai Jiao Tong University, he continued with postdoctoral research at INRIA. He currently leads the research and development of Cinebox. Before joining Crytek, he worked for Autodesk, Microsoft, and INRIA in computer graphics and animation. His major research has been published in IEEE CG&A, Computer Graphics Forum, ACM/Eurographics SCA, and Computers & Graphics.

Disney’s Hair Pipeline: Crafting Hair Styles From Design to Motion


Maryann Simmons and Brian Whited - Walt Disney Animation Studios

In this talk we will describe the hair pipeline utilized on Disney's most recent full-length animated feature, Frozen. Producing intricate hair styles is a challenging problem, spanning many departments. We focus on the generation of the hair groom and motion. This process starts by producing the groom, guided by 2D artwork from visual development and 3D proxies from modeling. We have developed a new intuitive interactive grooming tool, Tonic, which uses geometric volumes to procedurally groom the hairstyle. Once the hair volumes are sculpted, Tonic generates a set of guide curves within each Tonic hair tube. These tubes and guide curves are then passed to simulation, which produces motion for a subset of the guide curves. The motion is controlled using an animation rig and a two-level simulation rig, with the underlying dynamics calculated using our in-house solver. This motion is then mapped onto the full set of guide curves. In technical animation, cleanup and fine-tuning of the motion is done on a per-shot basis. Finally, the guide curves are interpolated and extra detail is added using XGen to produce the final set of curves sent to rendering. With this new workflow and toolset, the artists were able to create the nearly 50 unique hair styles seen in Frozen.
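The final interpolation step can be pictured with a minimal sketch: each rendered hair is a blend of nearby guide curves, weighted by distance at the root. This is only an illustration of the general guide-curve technique, not Disney's or XGen's actual code:

```cpp
// Minimal sketch of guide-curve interpolation: each rendered hair is a
// blend of nearby guide curves, weighted by inverse distance at the root.
// Illustrative only; the production interpolation in XGen is richer.
#include <array>
#include <cmath>
#include <vector>

using Vec3 = std::array<float, 3>;

struct Curve {
    std::vector<Vec3> points;   // all curves assumed to share a point count
};

// Assumes at least one guide curve.
Curve interpolateHair(const Vec3& rootPos,
                      const std::vector<Curve>& guides,
                      const std::vector<Vec3>& guideRoots) {
    Curve hair;
    hair.points.assign(guides[0].points.size(), {0.f, 0.f, 0.f});
    float totalWeight = 0.f;
    for (size_t g = 0; g < guides.size(); ++g) {
        float dx = rootPos[0] - guideRoots[g][0];
        float dy = rootPos[1] - guideRoots[g][1];
        float dz = rootPos[2] - guideRoots[g][2];
        // Inverse-distance weight; epsilon avoids division by zero.
        float w = 1.f / (std::sqrt(dx*dx + dy*dy + dz*dz) + 1e-4f);
        totalWeight += w;
        for (size_t i = 0; i < hair.points.size(); ++i)
            for (int c = 0; c < 3; ++c)
                hair.points[i][c] += w * guides[g].points[i][c];
    }
    for (auto& p : hair.points)
        for (int c = 0; c < 3; ++c)
            p[c] /= totalWeight;    // normalize the weighted blend
    return hair;
}
```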


Visualization and Processing of Highly Detailed Teravoxel Volume Data Sets


Klaus Engel - SIEMENS

Massive volumetric data sets from various application areas, such as oil & gas, simulation, and non-destructive testing, are rapidly growing in resolution and are surpassing the capacities of modern GPUs and host systems. We present a volume visualization framework that allows high-quality interactive rendering of teravoxel volume data sets even on standard PCs. It is based on a progressive, multiresolution, out-of-core volume rendering approach. Data loading is controlled by the rays that are cast through the volume data set. We employ a multiresolution hierarchy that is traversed and updated directly by the GPU during ray casting. Consequently, occluded and empty data is never loaded or rendered. The framework is able to render dense, anisotropic, regular, scalar volume data sets with all common rendering modes. We also show new algorithms for interactive editing, segmentation and measurement of such huge data sets.
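The core idea, that rays drive the data loading, so occluded and empty bricks are never fetched, might be sketched along the following lines. This is a generic illustration of ray-guided out-of-core traversal, with hypothetical names; it is not code from the Siemens framework:

```cpp
// Sketch of ray-guided, out-of-core volume traversal: a ray walks a
// multiresolution brick hierarchy; missing bricks are requested rather
// than rendered, so occluded or empty data is never loaded.
struct Brick;   // voxel payload, resident in GPU memory when loaded

enum class BrickState { Empty, Resident, NotLoaded };

struct BrickInfo {
    BrickState state;
    const Brick* data;   // valid only when state == Resident
};

struct PageTable {
    // Look up the brick covering `pos` at a level of detail chosen from
    // the ray footprint (coarser with distance).
    BrickInfo lookup(const float pos[3], float lodFootprint) const {
        // Stub: a real implementation walks a GPU-resident hierarchy.
        (void)pos; (void)lodFootprint;
        return {BrickState::Empty, nullptr};
    }
    // Record a cache miss so the host streams the brick in later.
    void requestBrick(const float pos[3], float lodFootprint) const {
        (void)pos; (void)lodFootprint;   // stub: enqueue an async load
    }
};

float traceRay(const PageTable& pt, const float origin[3],
               const float dir[3], float tMax, float stepSize) {
    float opacity = 0.f;
    for (float t = 0.f; t < tMax && opacity < 0.99f; t += stepSize) {
        float pos[3] = { origin[0] + t * dir[0],
                         origin[1] + t * dir[1],
                         origin[2] + t * dir[2] };
        float lod = t * stepSize;              // crude footprint estimate
        BrickInfo b = pt.lookup(pos, lod);
        if (b.state == BrickState::Empty) continue;   // skip empty space
        if (b.state == BrickState::NotLoaded) {
            pt.requestBrick(pos, lod);   // schedule load, keep marching
            continue;
        }
        // Sample and classify b.data here; placeholder accumulation:
        opacity += 0.01f;
    }
    return opacity;   // loop exits early once the ray is saturated
}
```

The early termination once opacity saturates is what keeps occluded bricks from ever being requested.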


WYSIWYG Computational Photography via Viewfinder Editing


Kari Pulli - NVIDIA

Digital cameras with electronic viewfinders provide a relatively faithful depiction of the final image: a WYSIWYG experience. If, however, the image is created from a burst of differently captured images, or non-linear interactive edits significantly alter the final outcome, then the photographer cannot directly see the results, but must instead imagine the post-processing effects. This paper explores the notion of viewfinder editing, which makes the viewfinder more accurately reflect the final image the user intends to create. We allow the user to alter the local or global appearance (tone, color, saturation, or focus) via stroke-based input, and propagate the edits spatiotemporally. The system then delivers a real-time visualization of these modifications to the user, and drives the camera control routines to select better capture parameters. This paper was presented at SIGGRAPH Asia 2013.
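One plausible way to picture stroke-based edit propagation is an affinity weighting in both image position and color, as in the sketch below. This is a generic illustration under those assumptions, not the paper's actual algorithm, which propagates edits spatiotemporally across viewfinder frames:

```cpp
// Minimal sketch of stroke-based edit propagation: each pixel receives
// the stroked edit (e.g. an exposure offset) weighted by its affinity
// to the stroke samples in both image position and color.
#include <cmath>
#include <vector>

struct StrokeSample {
    float x, y;        // image position (pixels)
    float r, g, b;     // color under the stroke
    float editValue;   // e.g. +1 stop of exposure
};

float propagateEdit(float x, float y, float r, float g, float b,
                    const std::vector<StrokeSample>& strokes,
                    float sigmaSpace = 50.f, float sigmaColor = 0.1f) {
    float num = 0.f, den = 1e-6f;   // epsilon guards pixels far from strokes
    for (const auto& s : strokes) {
        float ds = ((x - s.x) * (x - s.x) + (y - s.y) * (y - s.y))
                   / (2.f * sigmaSpace * sigmaSpace);
        float dc = ((r - s.r) * (r - s.r) + (g - s.g) * (g - s.g)
                   + (b - s.b) * (b - s.b)) / (2.f * sigmaColor * sigmaColor);
        float w = std::exp(-(ds + dc));   // Gaussian affinity
        num += w * s.editValue;
        den += w;
    }
    return num / den;   // edit strength for this pixel
}
```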

Computational Graphics: An Overview of Graphics Research at NVIDIA


Cyril Crassin - NVIDIA

This presentation will give a quick overview of some of the computer graphics-related research carried out in the research department of NVIDIA. NVIDIA Research explores challenging topics on the frontiers of visual, parallel, and mobile computing. Our current work spans many domains including computer graphics, physical simulation, scientific computing, computational photography, programming languages, circuit design, and computer architecture. We support advances in these fields through collaboration with academic and industrial research institutions, and disseminate results in technical conferences, journals, and other academic venues. The goal of this talk is to describe some recent publications and research projects from the visual computing team, which mainly focus on real-time rendering, computational graphics and illumination calculation on the GPU.

Pre-viz Technologies for Standard Production Workflows


Jean-Eudes Marvie - Technicolor

CGI animated films and VFX rely on massively detailed 3D scenes featuring hundreds of lights, high-resolution textures and many complex shaders. Pre-visualization of such assets involves complex computations, usually requiring the use of production renderers such as MentalRay or RenderMan. However, the non-interactive render times (up to several hours per frame) preclude any interactive setup of scene contents and lighting. Graphics hardware (GPU) accelerated solutions exist; they are generally based on progressive ray-tracing methods and are thus very accurate for rendering shadows and reflections. However, they generally require the author to model the scenes in a specific way, for instance using proprietary shaders or lights, in order to get a proper pre-viz. As such, they are not compatible with a standard production workflow based on Maya/MentalRay. Other proprietary solutions exist, but they are generally tightly bound to the workflows of large CGI companies and hardly applicable to other pipelines. In this talk, we present Technicolor's GPU pre-viz system, which is compatible with standard Maya/MentalRay production workflows. This GPU-based pre-viz system has been developed upon Technicolor's research work in the field of real-time rendering and pre-viz technology.

Overview of our IG software and the underlying 3D engine


Gregory Jaegy - Imagine3D

In this talk, we will give a technical overview of our IG (image generator) software and the underlying 3D engine. Both have been in development for nearly ten years, and feature most of the rendering techniques published in recent years, such as a deferred rendering pipeline, high-quality shadows, volumetric clouds, atmospheric scattering with dynamic visibility-distance settings, multi-channel support, dynamic time-of-day, etc.

We will also give some technical details on our geo-referenced terrain implementation, which features a unique lockless, multithreaded streaming system, geo-morphing, asynchronous vector rendering, dynamic height modification, volumetric snow accumulation (including dynamic snow clearance), and much more.
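As a rough illustration of the lockless streaming idea (the engine's actual implementation is not public), a single-producer/single-consumer ring buffer can hand finished terrain tiles from a loader thread to the render thread without taking any lock:

```cpp
// Sketch of one common lockless streaming pattern: a single-producer /
// single-consumer ring buffer passes decoded terrain tiles from an I/O
// thread to the render thread with only atomic loads and stores.
#include <atomic>
#include <cstddef>

struct TerrainTile;   // decoded tile payload

template <std::size_t N>
class SpscTileQueue {
    static_assert((N & (N - 1)) == 0, "N must be a power of two");
public:
    bool push(TerrainTile* tile) {        // called by the loader thread only
        const auto head = head_.load(std::memory_order_relaxed);
        if (head - tail_.load(std::memory_order_acquire) == N)
            return false;                 // full; retry on the next tick
        slots_[head & (N - 1)] = tile;
        head_.store(head + 1, std::memory_order_release);
        return true;
    }
    TerrainTile* pop() {                  // called by the render thread only
        const auto tail = tail_.load(std::memory_order_relaxed);
        if (tail == head_.load(std::memory_order_acquire))
            return nullptr;               // empty
        TerrainTile* tile = slots_[tail & (N - 1)];
        tail_.store(tail + 1, std::memory_order_release);
        return tile;
    }
private:
    TerrainTile* slots_[N] = {};
    std::atomic<std::size_t> head_{0};    // written by the producer
    std::atomic<std::size_t> tail_{0};    // written by the consumer
};
```

Because each index is written by exactly one thread, acquire/release ordering on the two atomics is all the synchronization the queue needs.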


Integrating 3D for web based cross media production


Jean-Marie Delorme - Dalim software

With 3D applications gaining in popularity, whether for Augmented Reality (AR) experiences or for 3D printing, Dalim software has identified an increasing demand for collaborative tools to edit and adapt these 3D models across the supply chain of manufacturing companies, AR or 3D print service providers, and anyone else wanting to use 3D models in their communication or business.

Applying Dalim software's vast experience in collaborative tools for classic 2D document and packaging production to a 3D engine enables users to combine collaborative review, composing and annotation workflows. This represents a unique and forward-thinking approach that will allow tomorrow's designers, producers and marketing departments to interact with 3D models as seamlessly as in classic print production.

With its web-based ES platform, Dalim software provides the collaborative tools needed to a) import and manage generic 3D models and their metadata from existing repositories, b) support a seamless collaborative annotation and versioning workflow for the 3D draft, and c) share the final 3D object in a form compatible with Augmented Reality or 3D printing.