2025.12.12 - The Last Of Us Season 2
Customer Story
Following their SIGGRAPH 2025 presentations on the creation of the hordes in The Last of Us Season 2, we reached out to visual effects studio Wētā FX, headquartered in Wellington, New Zealand, with teams in Vancouver and Melbourne, to find out more about their experience using Ragdoll in the show's production.
The team:

- Dennis Yoo, Animation Supervisor
- Andre Castelao, Senior Motion Simulation Artist
- Jason Snyman, Animation Supervisor
- Geoff Tobin, Massive Expert
History
**How long have you been using Ragdoll? How did you find out about it?**

I’ve been using Ragdoll off and on for around three to four years.

We have been using ragdoll-type simulation for characters at Wētā FX since 2015 through other software and tools. Ragdoll Dynamics in Maya was released in 2021, and I started testing it on a few shots for Avatar: The Way of Water.

**What part of the team uses Ragdoll the most?**

Our Animation Department has several artists using Ragdoll. However, most of the complex tasks that require heavy simulations, such as multiple characters interacting with each other, are done in the Motion Editing Department.

**In your experience, what issues is Ragdoll most useful for?**

Ragdoll is mainly used for secondary motion: things like follow-through, collisions, or character reactions that add realism to performances.
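Ragdoll's solver is a full rigid-body system, but the follow-through idea mentioned above can be illustrated with a toy damped-spring follower — this is a generic sketch of the concept, not Ragdoll code or Wētā's setup:

```python
# Toy illustration of follow-through (secondary motion): a mass on a
# damped spring chases an animated target, overshoots when the target
# stops, then settles. NOT Ragdoll's solver, just the underlying idea.

def follow_through(targets, stiffness=40.0, damping=6.0, dt=1.0 / 24):
    """Semi-implicit Euler integration of a damped spring follower."""
    pos, vel = targets[0], 0.0
    out = []
    for target in targets:
        accel = stiffness * (target - pos) - damping * vel
        vel += accel * dt
        pos += vel * dt
        out.append(pos)
    return out

# Target snaps from 0 to 1 and holds; the follower lags, overshoots
# past 1.0 (the follow-through), then settles back onto the target.
frames = [0.0] * 5 + [1.0] * 45
curve = follow_through(frames)
```

Tuning `stiffness` and `damping` trades snappiness against wobble, which is roughly the kind of control a simulation layer adds on top of a keyed or captured performance.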
Introducing Ragdoll to production
**At what stage in the pipeline did you start applying Ragdoll to address these issues? Did you try other methods first?**

We began testing Ragdoll on several of the wide shots, and the benefits quickly became apparent when Andre Castelao joined our show. He specialises in motion editing large groups using Ragdoll simulations, and his expertise helped us take full advantage of the tool’s potential for crowd dynamics.

In the past, we explored similar simulation techniques for crowd shots. Our current approach uses the Ragdoll features in Maya or Houdini, with the Houdini simulations still in the early stages of testing. When working with crowd shots, we have to treat many of the characters as hero characters, keyframing many Ragdoll settings and working on them individually to achieve more effective results.

**Could you tell us a bit more about how you introduce tools into a process like this? What were the key features you were looking for to solve the problem?**

We typically test new tools in smaller, controlled scenarios before integrating them into larger sequences. For Ragdoll, the key features we needed were stability, scalability, and the ability to interact seamlessly with our existing motion-capture pipeline. Our crew is becoming more familiar with Ragdoll and several artists use it regularly, so using it for The Last of Us Season 2 was a no-brainer.

**Why was Ragdoll the best solution for your challenges, specifically for The Horde?**

It’s a tool we’ve been working with for a few years now, and our Animation Artists have become increasingly familiar with it. There are varying levels of proficiency across the team, but having specialised experts handle the larger horde shots was extremely helpful. Ragdoll allowed us to generate believable mass interactions while maintaining control over individual performance details.
Crafting the Horde
**Could you give us a quick overview of how you approached creating the creature and character animations for The Last of Us?**

Our approach was a blend of motion capture and keyframe animation. We relied heavily on motion capture for grounded performances and layered in keyframed details for emotional beats or creature-specific movement. Ragdoll was introduced later in the process to enhance realism through secondary motion and physical interaction.

Most of the characters shared similar setups based on our GEN man/woman rigging. The biggest challenge was replacing the motion of hundreds of Infected in the blocked shot, incorporating a Ragdoll setup for the numerous characters in the scene that needed to interact with each other. We established a scripted workflow to replace the motion of each Infected with our Ragdoll setup, applying the same motion to each GEN man/woman and triggering dynamics as required. The motion recorded from the sim was then exported and applied back onto the Infected characters.
Jargon
"GEN man/woman": In-house name for their generic human 3D model and rig
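The copy-to-proxy / simulate / copy-back pattern described above can be sketched in plain Python, with dictionaries standing in for animation curves. The joint names and mapping below are hypothetical placeholders, not Wētā's actual rigs or scripts:

```python
# Sketch of the "replace motion with a Ragdoll proxy, then bake back"
# workflow. Joint names and JOINT_MAP are illustrative only.

JOINT_MAP = {                      # Infected joint -> GEN proxy joint
    "infected_hips": "gen_hips",
    "infected_spine": "gen_spine",
    "infected_head": "gen_head",
}

def to_proxy(infected_anim):
    """Copy the Infected's per-frame values onto the GEN proxy rig."""
    return {JOINT_MAP[j]: list(frames) for j, frames in infected_anim.items()}

def from_proxy(simulated_anim):
    """Apply the simulated proxy motion back onto the Infected joints."""
    inverse = {v: k for k, v in JOINT_MAP.items()}
    return {inverse[j]: list(frames) for j, frames in simulated_anim.items()}

# One character's blocked animation: joint -> one value per frame.
blocked = {
    "infected_hips": [0.0, 0.1, 0.2],
    "infected_spine": [0.0, 0.0, 0.1],
    "infected_head": [0.0, -0.1, -0.2],
}
proxy = to_proxy(blocked)   # ...the Ragdoll simulation would run here...
baked = from_proxy(proxy)   # identical here, since no sim was run
```

Keeping the mapping in one table is what makes the approach scale: the same script can iterate over hundreds of Infected without per-character setup.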
**What were the main challenges you faced during the process?**

The main challenge was creating the motion of the horde. In some wide shots, we were populating motion for nearly one thousand individuals. Ragdoll was used for secondary reactions of dead Infected being trampled, where we needed to generate additional motion based on other performances, mainly from mocap. Another challenge was making the groups of motion-captured performances feel cohesive, as if the performers were living in the same space and reacting to one another.
**In the SIGGRAPH presentation, you mentioned Anatomy as one of the challenges Ragdoll helped solve. Could you tell us a bit about that?**

It was mainly in our Ragdoll rig, which had previously been set up for our in-house GEN man and woman. These rigs were used to create large body piles of Infected, adhering to human anatomical limitations for joint positions.

**How did Ragdoll help you overcome the Anatomy challenge (and any others that might have come up)?**

It didn’t fully overcome anatomical challenges outside of dead body placement. When simulated, anatomical limits sometimes caused odd or extreme movements that needed correction. These were refined through Motion Editing in Nuance or keyframe adjustments in Maya. What Ragdoll does provide is a large volume of base secondary motion that can then be edited and polished.
Jargon
Nuance: Third-party motion editing tool from 1993 (!) by BioMechanics Inc.
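Corrections of this kind amount to clamping simulated rotations back into anatomical ranges. A minimal sketch — the joint names and limit values are illustrative placeholders, not Wētā's rig data:

```python
# Clamp simulated joint rotations to anatomical limits (in degrees).
# LIMITS values below are rough illustrative placeholders.

LIMITS = {
    "elbow_flex": (0.0, 150.0),    # an elbow should not hyperextend far
    "knee_flex": (0.0, 140.0),
    "neck_twist": (-80.0, 80.0),
}

def clamp_pose(pose):
    """Return a copy of `pose` with each angle clamped to its limit."""
    fixed = {}
    for joint, angle in pose.items():
        lo, hi = LIMITS.get(joint, (-180.0, 180.0))
        fixed[joint] = min(max(angle, lo), hi)
    return fixed

# A simulated frame where the elbow bent backwards and the neck
# over-rotated; clamping pulls both back into range.
sim_pose = {"elbow_flex": -25.0, "knee_flex": 90.0, "neck_twist": 110.0}
corrected = clamp_pose(sim_pose)
```

In practice a motion editor would smooth across frames rather than hard-clamp each one, but the principle — project the sim result back into the anatomically valid range — is the same.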
**You mentioned using it in combination with Massive. How did you combine both tools?**

The majority of the background was populated without Ragdoll simulation. For certain shots, we had to add some Ragdoll simulations, replacing the Massive motions with motion editing/animated motions to add more detail and interaction. Other shots featured full crowd motions from Motion Editing that had Ragdoll simulations applied.

"We can run an initial crowd sim in Massive using motion we captured of performers hitting the wall and climbing over each other, but the interactions will only be approximate, with body parts colliding and intersecting. We can then export the motion from Massive as AMCs and import it into Ragdoll to run a more accurate physics simulation."
Jargon
"AMC": ASF/AMC is a third-party motion capture format by Acclaim (BVH is the Biovision format)
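The AMC side of the format is simple enough to illustrate: header keywords begin with `:`, then each frame is a bare frame number followed by one line of channel values per bone. A minimal reader, as a sketch rather than production pipeline code:

```python
# Minimal reader for the per-frame section of an AMC file.
# Returns {frame_number: {bone: [channel values...]}}.

def parse_amc(text):
    frames, current = {}, None
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or line.startswith(":"):
            continue                      # skip blanks, comments, header
        if line.isdigit():                # a bare integer starts a frame
            current = frames.setdefault(int(line), {})
        else:                             # "bone v1 v2 ..." channel line
            bone, *values = line.split()
            current[bone] = [float(v) for v in values]
    return frames

# Tiny hand-written sample with two frames and two bones.
sample = """#!Sample AMC
:FULLY-SPECIFIED
:DEGREES
1
root 0.0 17.1 0.0 0.0 0.0 0.0
lfemur 10.5 0.0 0.0
2
root 0.1 17.0 0.0 0.0 0.0 0.0
lfemur 11.0 0.0 0.0
"""
motion = parse_amc(sample)
```

The bone names and channel counts come from the companion ASF skeleton file, which is why the two always travel as a pair.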
**Has using Ragdoll for The Last of Us changed the way you use the tool? Have you found new ways of using it that you didn't know about before?**

I can see how powerful the tool can be, especially when you need large volumes of secondary motion. It’s made us more confident in applying it to complex group dynamics.

We used a similar Ragdoll workflow to one created for another film, Better Man, which had crowd shots with multiple characters colliding and interacting with each other. The complexity and quantity of crowd shots needed for The Last of Us Season 2 pushed the boundaries of how quickly we could approach this within a tight turnaround. There was some clean-up and editing work required after the simulations, but we often make fine adjustments across our simulations.
Animating the Dogs
**Could you describe the process of animating the dog attack?**

The dog motion combined keyframe animation and motion capture, and Ragdoll was used to create secondary motion for some of the Infected. This was mainly when the Infected were more or less immobile and the dogs were still shaking them around during the interaction.

A full Ragdoll setup was created for the dogs and the Infected. Each interaction, from the fingers to the claws, had an impact on both parties involved. The weight of the dogs further enhanced the realism by pulling the Infected down when they made contact.

**Did animating dogs create challenges with the software that you hadn’t encountered when animating humans?**

There weren’t any software challenges. Dogs are challenging mainly because we have a limited amount of motion capture available. Our workflow consists of using as much motion capture as possible to maintain believability, while keyframing bespoke motion for specific shots.
You can learn more about Ragdoll by visiting our blog and forum. If you have any questions or want to share your Ragdoll workflow, don't hesitate to reach out.
About the team at Wētā FX
Dennis joined Weta Digital in 2003 as a Creature Animator on The Lord of the Rings: The Return of the King, and has since worked on titles including King Kong, Avatar, The Adventures of Tintin, and the Planet of the Apes trilogy.

Andre joined Wētā FX in 2013, and currently works as a Senior Motion Editor and Motion Simulation Artist, prepping shots using motion capture data and simulations and developing ragdoll character simulation using Ragdoll Dynamics in Maya.