Keynote Speakers

Hod Lipson

Columbia University

Curious and Creative Machines


Hod Lipson is a professor of Engineering and Data Science at Columbia University in New York and a co-author of the award-winning books “Fabricated: The New World of 3D Printing” and “Driverless: Intelligent Cars and the Road Ahead”. His work on self-aware and self-replicating robots challenges conventional views of robotics. Lipson directs the Creative Machines Lab, which pioneers new ways to make machines that create, and machines that are creative.


Can machines ask questions and generate hypotheses? Despite the prevalence of big data, the process of distilling data into scientific laws has resisted automation. Particularly challenging are situations with small amounts of data that are difficult or expensive to collect, such as in robotics and other physical sciences. This talk will outline a series of recent research projects, starting with self-reflecting robotic systems and ending with machines that can formulate hypotheses, design experiments, and interpret the results to discover new scientific laws. We will see examples from geology to cosmology, from classical physics to modern physics, from big science to small science.

Karen Liu

Georgia Institute of Technology

Human Motion in Dynamic Environments: From Robotics to Computer Animation and Back


Karen joined Georgia Tech in August 2007 in the School of Interactive Computing. She was an assistant professor at the University of Southern California from January 2006, after receiving her M.S. and Ph.D. from the University of Washington in 2001 and 2005, respectively. Her research interests are in computer graphics and animation, including physics-based animation, character animation, optimal control, numerical methods, robotics, and computational biomechanics.


Creating realistic virtual humans has traditionally been considered a research problem in Computer Animation primarily for entertainment applications. With the recent breakthrough in collaborative robots and deep reinforcement learning, accurately modeling human movements and behaviors has become a common challenge faced by researchers in robotics, artificial intelligence, as well as Computer Animation. In this talk, I will focus on two different yet highly relevant problems: how to teach robots to move like humans and how to teach robots to interact with humans.

Invited Speakers

Nils Thuerey

Technische Universität München

Wrangling Physics Simulations in Academia and Industry


Nils Thuerey works on physically-based animations, with a particular emphasis on fluid effects, i.e., water and smoke. These simulations find applications as visual effects in computer-generated movies and digital games. Examples of his work include novel algorithms to make simulations easier to control, to handle detailed surface-tension effects, and to increase the amount of turbulent detail. Currently, Nils is an assistant professor at the Technical University of Munich. Previously, he worked as R&D lead at ScanlineVFX and as a postdoctoral researcher at ETH Zurich. In 2013, he received a technical Oscar from the AMPAS for his work on the Wavelet Turbulence algorithm.


Physics simulations for phenomena such as smoke, explosions, or water are by now well-established tools in VFX production pipelines, and at the same time they are the focus of well-established research communities in computer graphics. Despite their widespread use, it is still far from trivial to work with these simulations. Implementing and debugging numerical solvers typically introduces a range of problems, and integrating such systems into production pipelines is always a tricky task. In this talk I will outline typical issues that arise and discuss some of the best practices I have encountered over the years. One interesting aspect here is how developing solvers differs between academic and industrial applications. While there is not always a clear one-size-fits-all solution, there are typical pitfalls that commonly arise. I will close the talk by discussing open problems in this area and by outlining a selection of future research topics for the general area of physically-based animation. I believe that deep learning techniques are particularly interesting here, and fluids, as complex physical phenomena, pose very interesting research challenges for this direction.

Hao Li

University of Southern California

Digital Human Teleportation using Deep Learning


Hao Li is CEO/Co-Founder of Pinscreen, assistant professor of Computer Science at the University of Southern California, and the director of the Vision and Graphics Lab at the USC Institute for Creative Technologies. Hao's work in Computer Graphics and Computer Vision focuses on digitizing humans and capturing their performances for immersive communication and telepresence in virtual worlds. He was previously a visiting professor at Weta Digital, a research lead at Industrial Light & Magic / Lucasfilm, and a postdoctoral fellow at Columbia and Princeton Universities. He was named one of MIT Technology Review's 35 Innovators Under 35 in 2013 and was also awarded the Google Faculty Award, the Okawa Foundation Research Grant, as well as the Andrew and Erna Viterbi Early Career Chair. He obtained his PhD at ETH Zurich and his MSc at the University of Karlsruhe (TH).


The age of immersive technologies will create a growing need for processing detailed visual representations of ourselves as virtual reality (VR) grows into the next-generation platform for online communication. A realistic simulation of our presence in such a virtual world is unthinkable without a compelling and directable 3D digitization of ourselves. With the wide availability of mobile cameras and the emergence of low-cost VR head-mounted displays (HMDs), my research goal is to build a comprehensive and deployable teleportation framework for realistic 3D face-to-face communication in cyberspace. By pushing the boundaries in data-driven human digitization and bridging concepts in computer graphics and deep learning research, I will showcase several highlights of our current research, from photorealistic avatar creation from a single image and facial performance-sensing head-mounted displays to full-body dynamic shape capture. I will also introduce new deep learning tools for processing clothed 3D human bodies and inferring photorealistic 3D faces from unconstrained low-resolution images, as well as demonstrate the latest highlights from Pinscreen.

Simon Clavet


Mocap, Ragdolls, and Staircases: Animation Technology at Ubisoft


Simon studied Mathematics and Physics at the University of Montreal. He then obtained a Master's in computer science, focused on viscoelastic fluid simulation. Since joining Ubisoft in 2005, Simon has developed an obsession with animation responsiveness, fluidity, and physical correctness. He leads Ubisoft Montreal's animation research group and participated in the development of Splinter Cell: Conviction, Avatar, Far Cry 3, Assassin's Creed 3, and For Honor.


Over the years, Ubisoft has been known to push animation quality and variety to new limits, with games such as Prince of Persia and Assassin's Creed. In this non-technical talk, Simon will show various new technologies, including Motion Matching, used in the recently released action game For Honor. He will also discuss future avenues for responsive animation synthesis, looking into how robotics and deep learning might soon revolutionize game animation.

Andy Nealen

New York University

Animation and Games: Emergent vs. Scripted, Freedom vs. Control, Ludology vs. Narratology, and Other Fun Culture Wars


Andy is an assistant professor of Computer Science at the NYU Tandon School of Engineering, where he co-directs the Game Innovation Lab, is affiliated with the NYU Game Center, and teaches classes in Game Design, Computer Graphics, and Digital Shape Modeling. He is also a member of Hemisphere Games and hosts the Indie Tech Talk series at NYU.


Finding a good balance between emergent and scripted behavior in computer animation, game design, and many other inherently aesthetic areas of art and science is an open and ongoing challenge, and one that is, more often than not, rather context-specific. In this talk, through examples from a number of representative research and design projects (some my own, most from others), I will explore the space between these extremes, present and discuss arguments from both sides, and consider where various communities, including those I am part of, land on these spectra. More importantly, together with the audience, I will discuss how and why these positions are in constant change, and how this change influences our ongoing dialectic.

Sergey Levine

UC Berkeley, EECS

Deep Reinforcement Learning


Sergey is an Assistant Professor in the Department of Electrical Engineering and Computer Sciences at UC Berkeley. His research focuses on the intersection of control and machine learning, with the aim of developing algorithms and techniques that can endow machines with the ability to autonomously acquire the skills for executing complex tasks.


Reinforcement learning provides us with a formalism for reasoning about sequential decision-making problems at a very high level of generality, ranging from settings like inventory management to fine motor control. Deep learning allows us to represent very complex functions and process rich raw sensory data. Combining the two makes it possible to learn very complex behavioral skills that integrate sensing, reasoning, and control. In this talk, I will discuss how deep reinforcement learning is transforming robotics, optimal control, and computer graphics. I will discuss some of the tradeoffs of recent deep reinforcement learning methods, and how these techniques can be used to generate diverse and complex behaviors, both in simulated virtual environments and in the real world.