Simon Preeg, Supervising Animator at Digital Domain, talks to ART of VFX about the animation in OBLIVION. An Academy Award winner for Best Visual Effects for The Curious Case of Benjamin Button, Preeg has also worked on Tron: Legacy, Pirates of the Caribbean: At World's End, Flags of Our Fathers, I, Robot, The Lord of the Rings: The Two Towers, The Lord of the Rings: The Return of the King, and King Kong.
Join special guests Bill Polson, Dirk Van Gelder, Manuel Kraemer,
Takahito Tejima, David G. Yu and Dale Ruffolo, from Pixar Animation
Studios' GPU team, as they show how real time display of subdivision
surfaces helps artists be more productive, and how this code is open
source and engineered for ease of integration.
OpenSubdiv is a set of open source libraries that implement high-performance
subdivision surface (subdiv) evaluation on massively parallel CPU and
GPU architectures. The code embodies decades of research and
experience by Pixar.
Join us for this informative webinar and see how Autodesk and Pixar are
working together to build technology with stunning real-time results.
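To make the idea behind the webinar concrete, here is a minimal sketch of mesh refinement, the operation at the heart of subdivision surfaces. This is not the OpenSubdiv API and all names are made up for the example; it performs only naive midpoint (4-to-1) triangle splitting and omits the smoothing weights that real schemes such as Catmull-Clark or Loop subdivision apply.

```python
# Illustrative sketch only: one round of midpoint triangle subdivision.
# Real subdivision-surface schemes also reposition vertices to smooth
# the limit surface; this shows just the topological refinement step.

def midpoint_subdivide(vertices, faces):
    """Split each triangle into four by inserting edge midpoints.

    vertices: list of (x, y, z) tuples
    faces: list of (i, j, k) vertex-index triples
    Returns the refined (vertices, faces).
    """
    new_vertices = list(vertices)
    midpoint_cache = {}  # edge key (min, max) -> index of its midpoint

    def midpoint(a, b):
        key = (min(a, b), max(a, b))
        if key not in midpoint_cache:
            va, vb = vertices[a], vertices[b]
            new_vertices.append(tuple((x + y) / 2.0 for x, y in zip(va, vb)))
            midpoint_cache[key] = len(new_vertices) - 1
        return midpoint_cache[key]

    new_faces = []
    for a, b, c in faces:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        # One corner triangle per original vertex, plus the center triangle.
        new_faces += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return new_vertices, new_faces

verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
tris = [(0, 1, 2)]
verts2, tris2 = midpoint_subdivide(verts, tris)
print(len(verts2), len(tris2))  # one triangle becomes 6 vertices, 4 faces
```

Each refinement level multiplies the face count by four, which is why evaluating subdivided surfaces at interactive rates benefits so much from the parallel CPU/GPU implementations the webinar describes.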
First trailer for Gravity is out! It's been a long time coming! Pumped for all the Framestore artists that worked on the film! Proud to have been a part of the super talented animation team! You all rock!
In July 1989, Jim Henson and Frank Oz made an appearance at the Puppeteers of America national festival at MIT in Boston. The event was documented and added to the Puppeteers of America audio-visual archive with a special introduction by Frank Oz recorded in 1993. Throughout the one-hour discussion, Henson and Oz perform Kermit the Frog, Rowlf the Dog, Cookie Monster, Animal, Fozzie Bear and Grover. Some of the topics include the history of Henson Associates, research on Sesame Street, and behind-the-scenes info about the Muppets, The Dark Crystal, The Jim Henson Hour, and Little Shop of Horrors. Frank Oz also demonstrates how they perform the Muppets on film and television.
SIGGRAPH 2013 Paper Video: We introduce a real-time and calibration-free facial performance capture framework based on a sensor with video and depth input. In this framework, we develop an adaptive PCA model using shape correctives that adjust on-the-fly to the actor's expressions through incremental PCA-based learning. Since the fitting of the adaptive model progressively improves during the performance, we do not require an extra capture or training session to build this model. As a result, the system is highly deployable and easy to use: it can faithfully track any individual, starting from just a single face scan of the subject in a neutral pose. Like many real-time methods, we use a linear subspace to cope with incomplete input data and fast motion. To boost the training of our tracking model with reliable samples, we use a well-trained 2D facial feature tracker on the input video and an efficient mesh deformation algorithm to snap the result of the previous step to high-frequency details in visible depth map regions. We show that the combination of dense depth maps and texture features around eyes and lips is essential in capturing natural dialogues and nuanced actor-specific emotions. We demonstrate that using an adaptive PCA model not only improves the fitting accuracy for tracking but also increases the expressiveness of the retargeted character.
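The core idea of fitting a PCA subspace to observed frames and refining it as more frames arrive can be sketched as follows. This is not the paper's method: the paper uses a true incremental PCA update with shape correctives, while this naive illustration simply recomputes the SVD over all frames seen so far; all names and data here are made up for the example.

```python
# Illustrative sketch only: a PCA subspace model of "expression" frames
# that improves as more observations arrive, in the spirit of the
# paper's on-the-fly adaptive model (NOT its actual incremental update).
import numpy as np

rng = np.random.default_rng(0)

def fit_subspace(frames, k):
    """Return (mean, basis) of the top-k PCA subspace of frames (n x d)."""
    mean = frames.mean(axis=0)
    _, _, vt = np.linalg.svd(frames - mean, full_matrices=False)
    return mean, vt[:k]  # rows of `basis` are principal directions

def reconstruct(frame, mean, basis):
    """Project a frame into the subspace and map it back to full space."""
    coords = basis @ (frame - mean)
    return mean + basis.T @ coords

# Synthetic stand-in for face shapes: 10-D points near a 2-D subspace.
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 10))
frames = latent @ mixing + 0.01 * rng.normal(size=(200, 10))

# Fit the model on all frames "seen so far" and check how well it
# explains the most recent frame.
mean, basis = fit_subspace(frames, k=2)
err = np.linalg.norm(frames[-1] - reconstruct(frames[-1], mean, basis))
print(f"reconstruction error: {err:.4f}")  # small: data is near-planar
```

In the actual system the subspace is updated incrementally per frame rather than refit from scratch, which is what makes calibration-free real-time tracking feasible.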