
  • And the Oscar goes to... Stunts

    In a move that's been decades in the making, the Academy of Motion Picture Arts and Sciences has officially announced the creation of a new competitive Oscar category: Achievement in Stunt Design. The award will debut at the 100th Academy Awards in 2028, recognizing work from films released in 2027 — a milestone that industry professionals have been demanding for years.

THE HIDDEN ARCHITECTS OF ACTION

Stunt performers have been part of cinema since the silent era. Think Charlie Chaplin doing his own pratfalls, Buster Keaton dodging falling buildings, or Yakima Canutt inventing the art of the moving horse-to-wagon transfer for John Ford. They were there from the beginning — but mostly in the shadows. While stars collected awards and audiences held their breath, the people throwing themselves off buildings, getting hit by cars, or being set on fire were rarely acknowledged.

And that invisibility? It was by design. Not out of malice, but out of necessity. After all, cinema is an illusion. A great stunt is meant to serve the story (yes, even the fights mean something) and to give our heroes and villains amazing abilities — all without drawing attention to the risk involved.

“It’s the contract we sign up for: we’re not supposed to be seen”, David Leitch, director and stuntman, LA Times

For generations, stunt performers have perfected the art of disappearing into the action, embodying characters and executing dangerous feats. They are meticulous planners, skilled athletes and dedicated artists, often overlooked in the celebration of filmmaking — and sometimes even absent from the credits. Literally. Being erased from film credits was a common practice, especially when studios wanted to maintain the illusion that actors did their own stunts. That’s why it was news when, in season three, The Mandalorian correctly credited the men (not man) behind the mask. Mando was a character built by three different bodies and one voice. And yet, most viewers have no idea.
That’s how good the illusion is.

From left to right: Pedro Pascal – the voice and image of Mando; Brendan Wayne – the gunslinger; Lateef Crowder – the martial artist/sword master | Image from ScreenRant

This lack of recognition has created a frustrating paradox: the very people responsible for some of cinema’s most breathtaking moments have been denied the industry’s highest honor.

THE TAURUS AWARDS: STUNT WORK’S UNDERGROUND OSCARS

Until now, the most prestigious recognition for stunt professionals has been the Taurus World Stunt Awards. Launched in 2001, the event was created by Red Bull founder Dietrich Mateschitz. It honors categories like “Best Fight”, “Best High Work” and “Hardest Hit”, but remains largely outside the mainstream eye. The 2025 Taurus Awards are set for May 10.

A SEAT AT THE TABLE – FINALLY

The Academy’s announcement on April 10, 2025, confirmed that a new Oscar will be introduced for Stunt Design at the 2028 ceremony. Unlike past honorary awards or brief mentions during montages, this will be a fully fledged, voted category.

“We are proud to honor the innovative work of these technical and creative artists, and we congratulate them for their commitment and dedication in reaching this momentous occasion”, Academy CEO Bill Kramer and Academy President Janet Yang

But what exactly does “stunt design” mean? According to John Wick director and longtime stunt professional Chad Stahelski, calling the category “Stunt Design” was a smart compromise. It acknowledges how collaborative stunt work truly is. As he explains in an interview with Vulture, bringing a stunt to life on screen involves far more than just a single performer. You have choreographers, stunt doubles, camera operators, riggers, safety coordinators, rehearsal teams, VFX artists, editors — and more, depending on the complexity of the scene. It’s all of these departments working in sync. A team.
BTS of John Wick: Chapter 3 – Parabellum | Image from Film Independent

“I have a team of ten guys that helps me choreograph. I have three other fight choreographers that are coming from Japan, China, France. I have two stunt-riggers that design how I do the wild gags. My idea is the gag. Their idea is how to do it safely. Camera guys shoot it. My editor helps me edit it. VFX helped me erase the wires. That’s pretty fucking collaborative”, Stahelski on John Wick.

Though the eligibility rules won’t be announced until 2027, the new Oscar is expected to go to a team, recognizing the collaborative nature of the craft.

A BIT OF HISTORY: THE IRONY AND THE DEBATE

The fight for recognition isn’t new. In 1967, legendary stuntman Yakima Canutt received an honorary Oscar for his groundbreaking work in stunt safety and rigging. Hal Needham — a stunt coordinator and performer on more than 30 films from the 1950s through the ’70s — was honored with a similar award at the 2013 Governors Awards, as reported by The Hollywood Reporter.

Action Heroes Owe Everything to Stunt Pioneer Yakima Canutt | WIRED

Nearly fifty years later, Jackie Chan was also given an Honorary Award for his “lifetime achievement” in 2016 — a nod to redefining action cinema with death-defying choreography. But these were exceptions. For years, industry insiders have lobbied for an official Oscar category for stunt work. As early as the 1990s, the Stuntmen’s Association of Motion Pictures began campaigning for change, according to the LA Times.

The irony hit a high point in 2020, when Brad Pitt won an Oscar for portraying a stuntman in Once Upon a Time... in Hollywood — a fictional character inspired by real professionals who, at the time, still had no Oscar category of their own. The conversation reignited with The Fall Guy (2024), a movie about — and for — stunt performers, released into an industry that still hadn’t formally recognized their work.
Until, finally, a year later — the Academy announced it. And much of the credit goes to director David Leitch, who helped lead the charge while promoting the film.

“This has been a journey for so many of us. Chris O’Hara and myself have invested several years into this. We built on the work of all the stunt designers who fought so hard for this in the past over the past decades. We are very grateful. Thank You @theacademy”, he said in an Instagram post.

  • How (not) to slow-mo?

    Zack Snyder's "Rebel Moon" has ignited discussions about the use (or abuse) of slow motion in films. Known for his stylistic, extra-slow-motion sequences, Snyder has faced criticism for employing the technique excessively in both parts of his latest epic sci-fi saga, with many going so far as to say he has "ruined it".

Rebel Moon - part 2 | ©Netflix

Let's not mince words: the director has disappointed his Snyder cult (us included), prompting reflection on where things went wrong. With "Rebel Moon", Snyder has shown that impressive visuals alone cannot compensate for a lack of compelling story and character development. To avoid repeating his mistakes, we've decided to dig deeper into the technique: what exactly is slow motion, and how can it be used effectively?

WHAT IS SLOW MOTION?

Slow motion, often abbreviated as slow-mo, is a technique that creates the illusion of time moving slower than normal. The effect is achieved by recording footage at a higher frame rate than it is played back at. For example, filming at 120 frames per second (fps) and then playing it back at 24 fps results in a scene appearing five times slower than real time. Some cameras, such as the iconic Phantom, are specifically designed for capturing high-frame-rate footage.

Dredd | ©DNA Films

The technique was pioneered by August Musger, an Austrian priest and physicist. Initially designed to mitigate flickering in early cinema projectors, his invention unintentionally introduced the concept of slow motion. Musger patented his device in 1904, laying the foundation for what would eventually become a fundamental tool in filmmaking.

WHEN TO USE SLOW-MO AND WHY?

Enhancing visual impact might seem the primary role of slow motion, yet its potential extends beyond this initial function. It allows viewers to immerse themselves in intricate details, amplifying the drama, action, or thematic elements of a scene.
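The arithmetic behind the effect is simple enough to make explicit. Here is a quick illustrative sketch in Python (our own helpers for this article, not tied to any real camera or editing software) showing the relationship between capture rate, playback rate and perceived speed:

```python
# Hypothetical helpers to illustrate slow-motion arithmetic.

def slowdown_factor(capture_fps: float, playback_fps: float) -> float:
    """How many times slower than real time the footage appears on screen."""
    return capture_fps / playback_fps

def playback_duration(real_seconds: float, capture_fps: float, playback_fps: float) -> float:
    """Screen time produced by `real_seconds` of recorded action."""
    return real_seconds * slowdown_factor(capture_fps, playback_fps)

print(slowdown_factor(120, 24))       # 5.0 -> the "five times slower" example above
print(playback_duration(2, 120, 24))  # 10.0 -> two real seconds become ten on screen
```

By the same arithmetic, a Phantom shooting 1,000 fps and played back at 24 fps stretches one real second into more than 40 seconds of screen time, which is why those cameras are reserved for genuinely spectacular instants.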
However, like any cinematic technique, moderation is key to maintaining its effectiveness. Overuse can diminish its impact. Therefore, carefully select specific moments where the effect serves a purpose beyond mere visual appeal. Here are a few instances where employing slow motion can give that extra “umph” to your movie:

Enhance Emotional Impact

Slow motion enhances emotional impact by enabling the audience to absorb every detail of a powerful moment. It is particularly effective in capturing the subtle expressions and reactions of characters during pivotal scenes, such as death scenes like Gwen's in "The Amazing Spider-Man 2”. Yeah… that one.

Highlight Action

In action sequences, slow motion emphasizes the choreography and intensity of movements, providing directors with an opportunity to showcase critical story elements: the protagonist’s technique, strategic decisions and the high stakes involved. Slow-mo transforms fast-paced action into a comprehensible visual experience, allowing us to appreciate essential details that might otherwise be missed. For example, in "Puss in Boots", slow-mo is used during Puss' first confrontation with his nemesis, Death, highlighting the moment he is actually injured for the first time, making him defeatable.

Create Suspense

By decelerating time, slow motion builds suspense and anticipation. This technique is invaluable in thriller or horror films, but its application extends beyond these genres. It intensifies the audience's anxiety as we anticipate the resolution of tense situations. In "The Final Girls", a film that humorously plays with horror tropes, slow motion is employed in a long sequence where the supposedly defeated killer unexpectedly reappears, chasing our characters through the forest.

Surreal and Narrative Moments

But slow motion can also play a distinct role in narrative storytelling.
For instance, in "Dredd" (2012), a drug known as “Slo-Mo” alters perception by slowing down time, intensifying colors and creating a whimsical atmosphere. Each use of the drug transports us into this surreal world, starkly contrasting with the violent scenes depicted. Similarly, filmmaker Lars von Trier uses the technique in several of his films to evoke a surreal ambiance, like the hypnosis sequence in "Antichrist".

Visual Aesthetics

It's fair to acknowledge that slow motion can enhance the visual appeal of any scene, making mundane actions appear extraordinary. However, this should not be the principal reason for its use. Choose wisely, as this is precisely where Snyder sinned. By turning everything (and we mean it) into a visual spectacle, he lost his audience's interest. A perfect example is the now infamous farming scene in "Rebel Moon 2”. Despite Snyder's explanation that the plant is crucial to the plot, the excessive sloooooooooow-mo of people harvesting made it feel more like a bad advertisement than a cinematic experience.

Puss in Boots | ©DreamWorks Animation L.L.C

300 VS. REBEL MOON

In "300", Zack Snyder employed slow motion sparingly to enhance storytelling, particularly in battle scenes where it underscored the Spartans' skill and bravery. Each slow-mo shot had a clear purpose, contributing to the film's epic and stylized tone, reminiscent of the graphic novel it adapted. Conversely, "Rebel Moon" uses slow motion excessively, seemingly because it aligns with Snyder's stylistic preferences. Unfortunately, this overuse diminishes pacing and narrative cohesion (not that there is much to begin with, but that’s another story) and actually disconnects the audience. Instead of accentuating pivotal moments, the frequent slow-mo scenes make nothing feel truly significant. They lack narrative justification.
This highlights two crucial lessons from Snyder on how not to use slow-mo: visual spectacle alone cannot compensate for weak dialogue and underdeveloped characters, and redundancy makes it feel boring, draining the excitement.

In conclusion, slow motion is a potent tool in filmmaking when applied judiciously. It has the potential to heighten emotional and visual impact when used purposefully and precisely. However, as seen in "Rebel Moon", excessive reliance on the effect can detract from the overall experience.

  • Oscars 2025: Best Visual Effects Nominees

    “All practical”, “no CGI”, the war is still ongoing. However, it is fascinating that the Oscars don’t seem to make such a distinction (well, almost...). Practical or digital? Both are encapsulated under the same category: “Best Visual Effects”.

Production concepts of Wicked shared and done by Oliver Beck

This year’s nominations include two films heavily marketed as using "only" practical effects, but we already have some breakdowns of all the scenes where VFX (and CG — for those making the distinction) are used. And it works! Both practical and digital effects have evolved a lot since their inception, and it’s thanks to both of them that we have great movies (and some flops, of course!). On the other hand, we also have two very different takes on motion-captured ape performances, and 2024's most beloved sci-fi epic. Here’s the breakdown of this year’s nominees:

ALIEN: ROMULUS – FEDE ALVAREZ

Touted as a film relying entirely on practical effects, Alien: Romulus was a return to old-school filmmaking techniques. While this claim holds weight, we’re glad to see behind-the-scenes footage revealing how CGI was also used to bring some of the film's most gruesome creatures and scenes to life — along with a few other things (asteroid belts, spaceships, floating acid, etc.). Practically speaking? Plenty was done in-camera, including our favorite: the swimming facehugger, specifically designed to leap from the water towards its prey.

Video from WetaWorkshop Instagram

According to IMDb, 23 companies contributed to the film's effects, covering everything from concept art and previs to practical effects, miniatures, and digital work. It’s exciting to see filmmakers putting in the effort to revive the practical side of effects, despite the cost and complexity. However, the way the movie has been marketed — as though CGI is the “evil sibling” — feels unnecessary. “We went all the way to create creatures with (...)
the philosophy of the old movies but with technology of today, to create something that people don’t see on screen everyday”, director Fede Alvarez told the Hollywood Reporter.

WICKED – JON M. CHU

Much like the Romulus campaign, Wicked has been marketed as "real" and "tangible", boasting impressive feats like planting nine million tulips. And, again, this is true — they built immense, magical sets and went all-in on practical elements. But let’s not ignore the significant digital contributions that helped bring the movie to life. Fortunately, we’re starting to see breakdowns and behind-the-scenes footage from Framestore, showcasing the work of thousands of artists from both the practical and digital realms. This transparency is refreshing — especially when compared to Barbie’s infamous behind-the-scenes “grey screens”, which sparked online debates about studio fakery and the reluctance to reveal just how much VFX was actually used.

DUNE: PART TWO – DENIS VILLENEUVE

The Oscar for Best Visual Effects at the 97th Academy Awards was awarded to Dune: Part Two (Paul Lambert, Stephen James, Rhys Salcombe and Gerd Nefzer). No surprises here — Denis Villeneuve’s epic sequel once again delivered jaw-dropping visuals. From massive sandworms to the sprawling deserts of Arrakis, the film has lots of incredible and epic scenes. A mix of techniques was used, but we already covered the ones we liked the most in this article.

Dune: Part Two | Official Trailer | ©Warner Bros

KINGDOM OF THE PLANET OF THE APES – WES BALL

Motion capture continues to evolve and innovate, and this film is the ultimate proof. With its hyper-realistic ape characters, Kingdom of the Planet of the Apes showcases how CG has reached new levels, enabling artists to create emotional, believable characters. While CG-heavy movies are (mostly) frowned upon today, the Planet of the Apes franchise has managed to sidestep this stigma.
Even though we know those apes don’t exist, we remain fully invested in the story. The integration of visual effects doesn’t detract from the film — it elevates it, and the artistry is undeniable.

“Through those three apes’ films that technology [mocap] improved, became robust. We were able to take it outside in the rain or in the snow”, Eric Winquist, VFX supervisor

BETTER MAN – MICHAEL GRACEY

“We usually do stunts and fights, and then we were thrown into dancing apes”, Emma Cross, Weta FX motion capture

Weta FX took on the challenge of creating a singing, dancing ape — a task a bit outside their usual expertise. This required them to innovate and adapt their workflow to make it work.

“We had to make a lot of motion studies there, to sort of work out how you convincingly make all that sound and all that energy and all that breath come out of this CG character”, Dave Clayton, Weta FX Previs & Animation supervisor

The decision to transform Robbie Williams into an ape wasn’t arbitrary. According to the director, it reflected how Williams sees himself, placing the biopic in the realm of magic realism. This creative choice brought a new layer of depth to the storytelling, offering audiences something we haven’t really seen before.

Which of these movies do you think should win the Oscar for Best Visual Effects? Or is there another film you believe should have made the list? Let us know!

  • OpenAI’s Super Bowl Ad

    This wasn’t the article we planned to write today, but given the buzz, it felt necessary. We’ve covered AI advancements and controversies before, but this one stands out for its irony.

The Super Bowl is known for its high-profile, creative advertisements, with each 30-second slot costing a literal fortune. This year, among the usual array of big-budget ads (including many related to AI), one stood out — not specifically for its message, but for what it revealed between the lines. OpenAI made its Super Bowl debut with The Intelligence Age, a 60-second ad that reportedly cost $14 million, according to The Verge. The spot features pointillism-inspired animation, transforming dots into iconic milestones of human progress — from fire and the wheel to DNA sequencing and space exploration. It culminates with ChatGPT assisting with everyday tasks like business planning and language tutoring, positioning AI as the next great leap in human innovation.

A compelling message, sure. But what really got people talking, at least in our algorithm? The fact that this AI-powered company, championing the future of generative technology, didn’t use AI to create its ad.

“The ad itself is a signal of how AI can assist — not replace, but aid and enhance — a human-led creative effort”, as described in OpenAI’s blog about the ad.

THE IRONIC TWIST

For the past two years, AI-generated content has flooded the internet, with endless claims that it spells the death of traditional creative industries. Yet, when it came to its own high-profile Super Bowl moment, OpenAI opted for traditional, human-made advertising animation rather than showcasing the technology it is pushing. To produce it, they partnered with Accenture Song. According to OpenAI CMO Kate Rouch, they did use their generative video model, Sora, for early prototyping, camera animations and rapid iteration. However, the final animation was crafted entirely by human artists.
“This is a celebration of human creativity and an extension of human creativity”, OpenAI CMO Kate Rouch

The decision seems, in and of itself… surprising. The marketing industry is already heavily turning to AI for generating visuals, copy and even entirely finished commercials — largely because it’s cheaper. But is it better? OpenAI's choice suggests that, when the stakes are high, human creatives are still the best bet. So… is it all just Silicon Valley hype?

AI ADS AND CONTROVERSY

Still from Apple's "Crush!" ad

We’re in a moment of disruption. Backlash is inevitable. Fear is palpable. In advertising, we’ve already seen brands face intense criticism for their use of AI. However, OpenAI’s approach was a calculated marketing move, allowing them to sidestep the kind of backlash that other AI-powered ads have faced. Oh, you don't remember? Let us give you a refresher, with our top three most controversial AI-related ads:

Coca-Cola’s AI-Generated Christmas Ad attempted to deliver a heartwarming holiday message, but instead, it was criticized for feeling "uncanny" and "soulless". Many argued that it lacked the warmth and authenticity that define classic holiday campaigns.

Apple’s "Crush" iPad Ad sparked outrage by showing creative tools — musical instruments, books, and art supplies — being crushed into an iPad. Critics called it a tone-deaf metaphor for technology replacing traditional creative methods.

Google’s Gemini Olympics Ad was pulled after backlash. The ad, which featured an AI writing a heartfelt letter from a young girl to her favorite athlete, was seen as diminishing human emotion in favor of automation.

SO, WHAT DOES IT ALL MEAN?

Generative AI is improving by the second and, as we’ve said before, the question isn’t if we should use it, but how we use it. OpenAI’s Super Bowl ad makes one thing clear: AI is a tool, not a replacement. And, for once, it's nice to see that humans are still the ones shaping the (hi)story.

  • Behind the VFX of 'The Last of Us' and '3 Body Problem'

    The Neuchâtel International Fantastic Film Festival (NIFFF) stands out as a premier Swiss event for genre film enthusiasts, showcasing fantastic, action-packed, and sometimes gory films. Needless to say, it’s an event we do not miss, and not only for the movies. There are also very interesting conferences on different subjects. This Monday, we attended two sessions on visual effects (VFX) as part of the NIFFF Extended program.

THE LAST OF US – STORM STUDIOS

Presented by Espen Nordahl, VFX Supervisor at Storm Studios in Norway, this conference delved into the intricate work behind the VFX for the hit series "The Last of Us". Nordahl’s team was responsible for 150 shots across six episodes, with around 30 people contributing over roughly five months of post-production. They handled complex shots involving “cordyceps” tendrils emerging from the mouths of the infected, face replacements, blue-screen work, and creating wounds and bite marks.

One key difference between the game and the series was the method of infection. In the game, characters are infected via spores, but the series opted for direct physical contact to emphasize the human element. “You’re fighting against humans, not things, and that's what the directors wanted”, explained Nordahl. This decision required the Storm Studios team to conceptualize and test various prototypes for the cordyceps – you know, those weird, disgusting tendrils that come out of the mouths of the infected. Being involved very early in pre-production allowed them to test different prototypes for this very important element throughout the series.

"Our motto is to fail fast, get ideas quickly”, Nordahl

This way, they could show early iterations to the directors, get feedback early on and determine which direction to take. This process was crucial in refining the cordyceps’ movement, aiming for a natural yet eerie effect.
They did lots of trials, some too aggressive, others too limp, falling out of the mouth like noodles, until they finally settled on a more plant-like, organic slow movement, reminiscent of how plant roots behave. The final effect was achieved through a combination of hand animation and simulation, ensuring the tendrils did what they wanted in specific shots.

Early look development of the tendrils | Image shown in presentation ©Storm Studios

The team also had to do CG face replacements. Although all the infected had SFX makeup, in some shots, like real close-ups, the prosthetics still felt a bit too rubbery, necessitating CG enhancements or replacement. Of course, the team built upon the incredible work already done by the SFX department, which allowed them to add a more realistic layer and a more disgusting appearance. In other instances, they had to “fill in the gaps”, adding tendrils, deteriorating teeth, etc. Finally, there were stunt doubles who needed their faces replaced with the actors’. "References are key to everything when you’re making photorealistic VFX", Nordahl emphasized.

3 BODY PROBLEM – PIXOMONDO

Michael Schlesinger, Lead Compositor at Pixomondo in Germany, shared insights into the VFX work for "3 Body Problem". The variety of shots in this project ranged from subtle enhancements to a big a** atomic explosion in space. There were a lot of invisible VFX involved. A prominent example: they had to add cold breath in 52 shots for the first episode, which was set in a cold environment. "It sounds simple, but you have to analyze how they talk and what they are doing to understand when the gas comes out of the mouth. It was all the more complicated, as they did not speak English!", Schlesinger explained. But the team also had to work on bigger full-CG sequences, like a rocket launch at Cape Canaveral and the dream sequence with a paper boat, which was filmed against a blue backdrop with the actor.
The origami boat then had to be replaced so that it had a more paper-like texture, in addition to creating the entire environment (water, sky, fog, etc.).

Dream sequence | © 2024 Netflix, Inc.

For the atomic bomb explosion in space, they took a different approach: they did it with compositing techniques. As mentioned before, referencing real footage is key, so Schlesinger was surprised to find that there is actual historical footage from "Operation Fishbowl", a 1962 US operation in which nukes were launched 400 km into space. However, they had to balance scientific accuracy with creative liberty to make the explosion visually compelling. “Scientists probably will see this and go like, what the? But we do it for the viewers, for the shot”, he joked.

Another particularly tricky aspect of VFX is depicting scale. For the Pixomondo team, it was one level up, as in space there are no familiar reference points. So, to convey how big the rocket's golden sail was, the team used techniques like reducing contrast for distant objects and sharpening closer ones, mimicking atmospheric effects even though space has no atmosphere. This approach helped create a convincing sense of scale.

These conferences highlighted the meticulous work involved in VFX, showcasing the blend of technical skill and creativity required to bring fantastical or realistic elements to life on screen. But that’s not all we have in store from NIFFF this week. Stay tuned as we cover more exciting conferences and reveal our top five must-see movies from the festival. You won’t want to miss it!
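A footnote for the technically curious: the scale trick described above (fading distant objects toward a haze color to fake atmospheric perspective) can be sketched in a few lines. This is our own toy illustration, not Pixomondo's actual compositing pipeline; the function name and falloff value are invented for the example.

```python
import math

# Toy sketch of faked atmospheric perspective: pull an RGB pixel toward a
# haze color as its distance from camera grows, lowering its contrast.
def haze(pixel, haze_color, distance, falloff=0.002):
    """Blend `pixel` toward `haze_color` based on `distance` (arbitrary units)."""
    t = 1.0 - math.exp(-falloff * distance)  # 0 at the camera, approaches 1 far away
    return tuple(round(p * (1.0 - t) + h * t, 3) for p, h in zip(pixel, haze_color))

near = haze((1.0, 0.2, 0.1), (0.7, 0.75, 0.8), distance=0)     # unchanged, full contrast
far  = haze((1.0, 0.2, 0.1), (0.7, 0.75, 0.8), distance=5000)  # washed toward the haze color
```

In a real compositor the same idea is driven by a depth pass per pixel, but the principle is just this blend: the more an object reads as "hazy", the farther away (and therefore bigger) the brain assumes it is.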

  • Nimona's animation at NIFFF

    On the second day of the 2024 NIFFF Extended program, the focus pivoted from digital VFX — showcased with "The Last of Us" and "3 Body Problem" — to the equally intricate world of 3D animation, with the movie Nimona, made by DNEG.

©Netflix

The journey of bringing "Nimona" to the screen was anything but straightforward. Initially started by Blue Sky Studios, the project faced uncertainty when the studio closed in 2021. However, the passion and dedication of key individuals led to its revival at DNEG, where, at its peak, 100 animators worked together to bring the graphic novel adaptation to life.

Toby Seale, Lead Animator at DNEG, shared insights into their animation process, which started with… graphs. Yes. Given the project's temporary halt, the directors had the chance to deeply explore the character arcs and dynamics. They created graphs tracking the characters' emotional journeys throughout the movie, both individually and collectively. This tool proved invaluable for directors and animators alike, allowing them to infuse this emotional subtext into the animation.

Another important resource for the animators was the style guide created by Ted Ty, the Animation Director for "Nimona". They also worked with the concept of "golden poses". What are those, you may ask? Well, these are still frames where the character's pose conveys the essential elements of a shot, guiding the viewer's eye to the most important aspects. This technique, borrowed from traditional 2D animation, ensures clarity and intention in each frame. It also enabled quick feedback cycles with directors, helping animators avoid overworking shots that might later be changed.

Like VFX, animation relies heavily on references, though more for acting than for textures. The team films themselves acting out scenes, often in a cartoony style, to better understand body and facial movements.
Even subtle actions, such as holding a hand, are filmed to capture the intricacies of key real-life movements. For the animation of "Nimona", the team aimed for a theatrical, exaggerated style, often utilizing the entire frame and set. However, in moments of vulnerability, the animation shifted to a subtler, more natural approach. A striking example is a scene where Nimona, feeling exposed and honest, is depicted from the back in a very still position, emphasizing her emotional state. But she does more than that...

"Nimona’s transformations were one of the biggest challenges for the animation team", confessed Seale.

©Netflix

And we are not surprised. It turns out they were tasked with creating her transformations entirely in 3D, using VFX only for enhancements. This technical challenge also had to align with Nimona's emotional arc. For instance, when she is playful, her shape-shifting is quick, fun and seamless. They achieved this by nesting the rigs of different animals inside one another, like Russian dolls. However, contrary to what many might believe, it’s not the larger, more dynamic movements that are most challenging. Minimal movements posed a greater challenge, as conveying emotion with slight gestures requires immense skill.

  • Backlash for Marvel’s new series opening credits made with AI

    As the credits rolled on the inaugural episode of Marvel's "Secret Invasion”, controversy sparked across the internet. For the first time, artificial intelligence had been used in a major production, causing serious backlash from critics, the industry and fans alike. But why? And what can history tell us about the use (and arrival) of new technologies in art?

CREDIT: Marvel - "Secret Invasion" opening credits

Now, this will be a lengthy article, but if AI is a topic that frightens or fascinates you, please bear with me…

As artists working in the film field – with Joe being initially a CG artist, and me being more from the writing side of the creative process – we found ourselves confronted with the possibility of extinction. With technologies like Wonder Dynamics, which integrates CG characters into real footage, or ChatGPT, we've even wondered if we should start learning to make bread, or perhaps arepas. Since AI first crawled its way into our studio, a year ago when Midjourney was in its beta-testing phase, we have been on a rollercoaster of emotions. But instead of looking the other way, we started experimenting with many AI tools, and... good news! While some make it easy to create images or text – though not necessarily great ones – we realized it will not replace human creativity.

AI is a new tool in a filmmaker’s arsenal, like a personal minion you can overwork and it'd be happy about it. But it's not a magic tool (a misconception many seem to have); it takes time, patience and a lot of editing. Plus, it needs to be trained or told what to do. This means the ideas (and the final product) are still coming from your little brain, which, actually, is the scary part. But I digress…

So, let's gain a little perspective here. When you look back, the history of art itself testifies to the fact that new technology can also engender new forms of creativity.*

* Note to our readers: this is about technology, not copyright – which deserves another article.
WHEN PHOTOGRAPHY WAS INVENTED, PAINTERS WERE UPSET

Rewind to the 19th century, when Nicéphore Niépce, a French pioneer, invented photography. It was 1822. The art world took a hit and was in turmoil, as traditional artists (mostly portrait and landscape painters) worried that this new mechanical piece of technology, able to capture reality as none of them could, would make them obsolete. Yet, instead of replacing traditional art, photography evolved into an art form of its own and pushed painters to explore new artistic styles like Impressionism, which celebrated the human interpretation of the world in a way a photograph could not.

DIGITAL ANIMATION VS. TRADITIONAL ANIMATION

The introduction of digital animation in the 90s was similarly impactful. Before then, animation was an intensive process that took hours and required artists to draw each frame by hand. Digital animation significantly streamlined this process and allowed for new techniques and styles. Unsurprisingly, many traditional animators initially resisted this shift due to concerns about job security and the potential loss of the hand-drawn aesthetic. But don’t take it from us. Here’s a reaction from Disney animator Aaron Blaise to Corridor Crew’s manga AI short film “Rock, Paper, Scissors”.

Similarly, when Jurassic Park was made, creatures were traditionally animated in stop motion. Phil Tippett was the maestro of this specific, time-consuming art. Having worked on classic movies like King Kong, Star Wars and RoboCop, it was only natural that he was part of Spielberg’s team. But then came Steve 'Spaz' Williams, who was so sure CG animation was the future that he worked his a** off to make a walk cycle for the T. rex. It was thought to be impossible, yet he did it. And… CG animated dinosaurs were used in the movie!
When he saw it, Tippett got very sick and told Spielberg that it "made him feel extinct" (a line that was later used in the movie itself, when paleontology suddenly seemed obsolete, by the way). But on the contrary! He had such a deep understanding of how creatures should move that he guided the CG team and helped them understand movement, in order to animate the dinosaurs in a believable, lifelike way. Instead of going extinct, he evolved. He became the "Dinosaur Supervisor" of the movie.

Images from "Jurassic Punk" trailer

"AI WILL REPLACE US", "AI ARTISTS ARE NOT REAL ARTISTS"

In our present day, we find ourselves at a crossroads again, with AI technology disturbing our status quo. Many see AI as a threat, while others view it as a new tool to play with, to explore, to further the narrative of their work, to question even reality… that is what artists do. The controversy surrounding AI in art recalls the initial reactions to photography and animation. While it’s crucial to acknowledge and address the issues of AI (ethical, moral, copyright, how it’s going to be exploited by big corps, etc.), we should also recognize its potential as a new medium for artistic expression, creating new breeds of artists and pushing the boundaries of how we create things. Refik Anadol, who uses AI to create immersive installations, is a perfect example of this new kind of creative individual. Here’s another interesting video made by MoMA itself about AI in art.

MARVEL'S AI OPENING CREDITS

Now… don’t get us wrong. We are certainly not defending Marvel or any other big studio that has been notorious for mistreating its artists – hence the ongoing writers’ strike, or the complete and utter disrespect for VFX artists. So, why did they do it? According to the executive producer and director of the series, Ali Selim, the decision to use AI for the opening credits was a conscious artistic choice, echoing the show’s themes of alien infiltration and identity uncertainty.
At least they are being open and direct about it. Now, was this done to cut costs? Well, that’s (unfortunately) how the world works. Could this have been accomplished by a traditional artist (2D or 3D)? Certainly. Would it have had the same impact? Not sure.

The sequence was made by Method Studios, an award-winning VFX studio, who told The Hollywood Reporter that "the production process was highly collaborative and iterative, with a dedicated focus on this specific application of an AI toolset. It involved a tremendous effort by talented art directors, animators (proficient in both 2D and 3D), artists, and developers, who employed conventional techniques to craft all the other aspects of the project. However, it is crucial to emphasize that while the AI component provided optimal results, AI is just one tool among the array of toolsets our artists used. No artists’ jobs were replaced by incorporating these new tools; instead, they complemented and assisted our creative teams".

In conclusion, the story of art is a story of evolution – of new technologies and techniques challenging and transforming the way we create and perceive art. It is sometimes scary, other times exciting... but one thing is sure: AI is out of the box and here to stay. So, how are we going to use it?

___ Alex Iwanoff

  • "Longlegs": How to make a haunting villain?

    "Longlegs" features Nicolas Cage in one of his most unsettling roles to date, serving as a perfect example of what makes a horror villain memorable. The kind that makes you uneasy, not because of what he does, but because of his mere presence.

If you haven’t heard of it, "Longlegs" is one of the most anticipated horror movies of 2024, not only because it stars Nicolas Cage but also due to an incredible marketing campaign. The film, directed by Osgood Perkins, follows an FBI agent unraveling a series of clues to stop a serial killer known as Longlegs. It is beautifully shot; its cinematography is on point, as is the production design, pace, and soundtrack. But we’re here to talk about Longlegs, the character—the doll-making, intriguingly haunting villain. Without delving into spoilers, let's explore three techniques used by the director to ensure Cage's appearance in the movie is unforgettable.

OBSCURED PRESENCE

One of the film's most effective strategies was its marketing campaign. It didn’t reveal much, reminiscent of some internet horror mysteries or movies like "Skinamarink", and maintained a very eerie vibe throughout. This approach felt fresh in an era where trailers often feel formulaic. Similarly, the way Cage's character was marketed enhanced the mystery and terror. Throughout the promotional campaign, Longlegs was not shown. We only knew the character was played by Cage (thanks to the credits), and we could hear him. Even in the final trailer, we couldn’t clearly see what he looked like. But one thing was certain: it instantly made you uncomfortable. This technique carried over into the film, where Cage’s presence throughout the beginning of the movie is obscured, keeping viewers on edge… until we can finally clearly see him and wonder: what is that? Why? How? All while appreciating his performance. We must say, we were not disappointed.
And that, for us, was the most difficult thing to achieve: to live up to the expectations you’ve built up for a character or movie.

“What you don't see is generally scarier than what you do see”, Steven Spielberg on filming Jaws.

Avoiding showing your antagonist, monster or villain is a very successful technique, used in many, many creature features, such as "Alien", "Jaws" and even "Jurassic Park". It’s also commonly used in investigation thrillers, which "Longlegs" initially pretends, and markets itself, to be (and don’t get us wrong, we liked the movie, but it quickly strays from that genre. Expecting an investigation, we felt a tiny bit let down, but that’s for another article). For example, in "Se7en", you only get bits and glimpses of the killer, but you never know what he looks like until the very end. What you know is what he can do, making him a monster in the viewer's mind. The same happened when the antagonist of "8MM" (also with Nicolas Cage) was revealed. In those movies, they created a monster in your head, only to reveal someone, well... “normal”. Which is very true to real life—a neighbor-next-door, John Wayne Gacy-esque kind of vibe. However, in Longlegs' case, he's anything but normal. He’s weird, strange and uncomfortable, making people long to see the movie.

THE UNCANNY APPEARANCE

Cage’s physical transformation for the role is a masterclass in utilizing the uncanny valley to evoke fear. Instead of relying on tropey elements for villains, such as facial scars, the creators went in another direction: plastic surgery—extensive, botched plastic surgery. Today, this feels particularly frightening. With the rise of the "Instagram face" and celebrities looking drastically different from their younger selves, plastic surgery has reached new levels (if you don’t know what I mean, here’s a viral video on TikTok). For Longlegs, it only adds depth to the character.
And that’s where it was successful, because villains often symbolize deeper societal fears, and Longlegs is no different. His altered face and bizarre behavior could represent anxieties about identity and the lengths to which people go to physically change themselves so that someone else will like them.

“He’s gone through all these plastic surgery botch jobs to make himself look as pretty as he can for [spoiler]”, says special effects makeup artist Harlow MacFarlane.

Additionally, Cage’s character wears white, pale makeup with soft, strawberry lips, giving him a ghostly, near-death appearance (is he even alive?). The character is also dressed entirely in white, which is interesting because this shade is normally associated with purity and sanctity—it's an eminently positive color. This creates a striking juxtaposition, making his presence all the more unsettling.

EERIE BEHAVIOR AND SPEECH

A villain's behavior and speech can often be more horrifying than their actions. In "Longlegs", these elements play a crucial role in creating the character. As with any other Cage movie, you go to see what he has to offer. Is it crazed Cage? Depressed Cage? But no, he’s something else entirely—something we have never seen him do before, and he does it beautifully (yes, we’ve liked him since the '90s!). It turns out this character is personal to Cage, as he drew from childhood memories of his mother, who lived with schizophrenia and severe depression, according to CBR. “She would talk in terms that were kind of poetry. I didn't know how else to describe it. I tried to put that in the Longlegs character because he's really a tragic entity,” said Cage. Cage also explained to Entertainment Weekly that he sees Longlegs "as neither male nor female". He was influenced by the hermaphrodite character in Fellini's Juliet of the Spirits, emulating their "bizarre vocal shrieks and whatnot".
With these inspirations, Cage created a character whose mannerisms are both erratic and deliberate, creating a sense of unpredictability. For us, he’s like a Willy Wonka from Hell (as if Wonka weren’t scary enough already) mixed with The Joker, with his nice suit and almost whimsical appearance.

In conclusion, Longlegs (the character) is a testament to the multifaceted nature of creating a scary villain. By combining an uncanny appearance, unsettling behavior and the strategic decision to obscure his presence, the character becomes a haunting figure that lingers in the minds of the audience long after the movie ends.

  • Brands that dared to use AI

    Whether it's for laughs, to prove that we're not yet (completely) outdated, or to integrate it into a workflow to produce a unique audiovisual piece, here are some advertisements made with the help of generative artificial intelligence (AI). But before we go on, what exactly is generative artificial intelligence? Well, it's "a branch of AI that aims to create models capable of generating original and realistic content, such as images, texts, music, videos, etc.", explains ChatGPT. Now that we've clarified that, let's get back to our examples:

COCA-COLA - MASTERPIECE

This is perhaps the most well-known advertisement to date that uses AI. With this ad, Coca-Cola made a big impact. It is ambitious and gives prominence to art, including special effects, while also incorporating AI into the process. The ad was produced by Electric Theatre. For the curious minds, here is the behind-the-scenes detailing the various steps required to reach the final product.

SALESFORCE - ASK MORE OF AI

Featuring actor Matthew McConaughey, The Mill employs generative AI to create a series of short but impactful ads. "The outcomes it [AI] produces are truly groundbreaking, presenting looks that have never been seen before. The Mill is excited to be on the forefront of the increasing utilization of AI in visual effects, as it opens new avenues for creative expression", they state on their website.

HEINZ - A.I. KETCHUP

The ketchup brand wasted no time. As soon as generative AI was introduced, they jumped on the opportunity to create an entertaining video, stating that even AI knows that the real ketchup is Heinz. By generating a series of images with DALL-E, the brand shows that using just the one word "ketchup", the tool always displays a bottle with a label resembling the brand's.

BESCHERELLE - POSTERS

Even though it's not a video, this advertisement deserves a spot on this list for the sheer ingenuity of the team behind the idea, Brainsonic.
The concept is simple: given that AI uses prompts (texts that describe the desired image), one must know how to spell correctly. And what better than a Bescherelle to avoid nonsensical images, albeit quite hilarious ones! © Brainsonic

FAKE CAR AD

AI has a way of adding strangeness to our lives... So, to round off this list, here's one of the fake ads we have created with AI for a brand-new van, the "Terra Guard 4x4", which will protect you from an alien invasion and/or has enough space to take a ride with them. We had the idea, iterated some images with Midjourney, worked the script with ChatGPT and created the videos with Runway. The voice was made with ElevenLabs. It's potentially a fun way to create a previs for an outlandish idea. Would you like to have one? Contact us!

  • Coca-Cola AI christmas ad

    If you’re in the filmmaking or advertising industry, your social feed has probably been on fire over these three ads (you read that right: not one... but three!). Yes, Coca-Cola “remade” their iconic 1995 Christmas commercial—you know, the one with the red trucks and Santa—using generative AI. And it’s all everyone is talking about! According to LLLLITL, three AI studios collaborated on the ads, using most of the video-generation tools we know of: Leonardo, Luma, Runway, Minimax, Kling and even Sora.

Ad 1: "Unexpected Santa" - Studio: Wild Card
Ad 2: "Secret Santa" - Studio: Secret Level
Ad 3: "Silver Santa" - Studio: Silverside AI

Now, Coca-Cola is a brand known for its innovative and strong marketing strategies (spending an average of $4 billion annually on campaigns), but this time? Not everyone seems to be impressed.

THE BACKLASH

The response online has been harsh (or at least based on what my algorithm shows). Many say the ad feels off-putting and uninspired. It lacks the warmth and magic of the original, but does include some eerie visuals. It seems Coca-Cola’s attempt to inject "real" magic into their campaign has, ironically, stripped away all the real, instead highlighting the current limitations of generative AI.

“These new AI-generated ads highlight the weaknesses and hard limitations of the current wave of video-generation models […] In short, it’s footage that isn’t too challenging for generative AI to produce”, Forbes.

SURPRISE: PEOPLE ACTUALLY LIKE IT

Despite the backlash, System1—a platform that tests ad performance by comparing it to a massive bank of existing data to gauge emotional resonance—tells a very different story. According to Andrew Tindall, SVP at System1, regular audiences (non-marketers and everyday consumers) actually loved the ad. In fact, it outperformed previous campaigns in its “potential to convert in-market demands.” “I was wrong.
This ad, which gave me a visceral reaction, just secured effectiveness gold”, Tindall wrote in his opinion column.

Interestingly, the ad’s success wasn’t tied to the fact that it was made with AI. Viewers weren’t even informed about how it was produced. Instead, the key takeaway is that Coca-Cola’s consistent holiday branding—the red trucks, Santa and nostalgia—continues to resonate. The method of creation didn’t matter; the emotional impact did. This raises an important question: do audiences care about how something is made, or just how it makes them feel? Familiarity breeds comfort, and Coca-Cola has perfected the art of creating happy memories. “Consistent brands refresh the same memory structures again and again. It’s much easier to do this than to create a new memory”, Tindall explains. However, as he also points out, few brands can achieve this. Coca-Cola’s campaign has been building associations for 30 years. That enduring memory structure did the heavy lifting—not the idea, and certainly not the AI.

FRUSTRATING REALITY FOR (SOME) CREATIVES

For industry creatives, though, these Coca-Cola AI ads are kind of frustrating—not just because of what they represent (it's cheaper and faster!), but also because the brand has already demonstrated how to use the technology creatively. Their earlier Masterpiece ad was a great example of how to use the tool in a fun and innovative way. This year’s campaign, by contrast, feels like generic content churn. It recalls other AI experiments, like the Toys“R”Us ad, which was touted for its speed and efficiency but lacked any memorable charm. So, yes. AI is quickly becoming a dream tool for business executives. As marketing professor Marcus Collins explained in an interview with Today: “It’s a push for marketing efficiency. How do we create more with less? It’s just business 101.” And from a business perspective, Coca-Cola’s ad is a success.
It generated buzz, cost less to produce and allowed for rapid regional customization, with over 45 versions of the campaign. On Coca-Cola’s YouTube channel (as of today), these AI ads have up to 150K+ views each, compared to just 12K+ for a live-action counterpart. So, unfortunately for us, math is not mathing. AI seems to be winning this round.

AI IS NOT A MAGIC TRICK

Now, we’ve said it before, and we’ll say it again: creating something with AI isn’t easy either. It requires countless iterations and collaboration with various types of creatives. It’s far from magic or the result of a “one-click” process. For instance, Secret Level revealed that they had to "generate over 18,000 images for 34 shots, totaling 85 minutes of content, created in under three weeks". To add a human touch, they even "cast real people as the likeness of their characters". Similarly, Silverside shared their own insights into the ad’s development. According to their website: “Originally, a project of this scale would have taken over 12 months, but with AI, our team brought it to life in just two months. Rendering 10,000 frames, creating 5,000 video segments, and involving over 40 creatives worldwide—including a live choir—this project pushed the limits of AI while highlighting the irreplaceable human touch”. Yet, despite all that effort... this ad? Still feels weird.

THE FUTURE OF ADS... SORRY, CONTENT

Ads are already everywhere, invading nearly every corner of our personal lives. Whether you’re reading an article, watching a video or scrolling through photos of friends, ads just pop up! So, shouldn’t we at least try to make them worth people’s time? This campaign, along with trends like AI influencers, proves that AI-generated ads are booming. Like it or not, they’re already reshaping the marketing landscape. They are faster and cheaper to produce—exactly what brands need to feed algorithms and stay “top-of-mind” with consumers.
It’s clear we’re only going to see more of this type of “filler” content (and not just in advertising!). So, it’s not really about the tool used to create the ad—it’s about the constant need to generate a steady flow of content to keep up with demand. And in doing so, are we sacrificing creativity for convenience? Anyway, we’ll leave you with Coca-Cola’s new AI Santa and wish you a Merry Christmas!

  • Lionsgate x Runway: a new AI-partnership

    Lionsgate, the Hollywood studio behind movies like John Wick and American Psycho, has partnered with Runway, a leading company in generative AI technology. Announced in September 2024, this collaboration focuses on developing a custom generative AI model trained on Lionsgate's extensive film library, which includes over 20,000 titles. Tailored to the studio’s specific needs, the model will be used to explore AI throughout the filmmaking process—from storyboarding to editing and post-production—with a clear goal of reducing costs.

“Runway […] will help us utilize AI to develop cutting edge, capital efficient content creation opportunities”, Michael Burns, Lionsgate Vice Chair, official statement.

Tests made by Orbitae, using Gen-3

For those unfamiliar, Runway is a New York-based company specializing in generative AI tools that enable users to create and manipulate video content. Predating other notable AI video generators like Sora and Kling, Runway recently released its latest model, Gen-3, which delivers polished and consistent outputs. Runway’s capabilities also include tracking, ultra slow-motion effects and green-screen functionality, to mention a few. However, effective use of these tools requires numerous iterations and a new army of artists dedicated to exploring the model’s limitations and advantages.

Now, while this partnership marks a significant and official step in the use of AI by a major studio, it's important to note that AI’s inclusion in the filmmaking industry is not new. Here are a few recent examples to give you an idea:

In 2017, 20th Century Fox started using machine learning to analyze movie trailers and determine audience preferences, as reported by The Verge.
In 2020, Warner Bros partnered with Cinelytic to leverage AI for box office predictions and marketing strategies.
By 2023, Reuters reported that Disney was establishing a task force to explore AI applications across the company.
That same year, Netflix sparked controversy with its $900,000 AI job posting. In May 2024, Sony's Chief Executive emphasized the studio’s strong focus on AI during an investor conference.

On the other hand, Runway has also collaborated with major players in the creative software industry. Indeed, earlier this year, Adobe partnered with various generative AI tools—including Runway—to integrate AI technology into Premiere Pro, enhancing professional video workflows with features like generative fill, video editing capabilities and even B-roll generation. Meanwhile, as George Lucas reminded everyone at the Cannes Film Festival, machine learning (a.k.a. AI) has been a cornerstone of VFX for years. Now, Runway is positioning itself as a partner in this space, offering a different solution for creating various types of effects—one of the aspects that attracted Lionsgate. In an interview with the Wall Street Journal (WSJ), Burns explained that the studio plans to use the new AI tools initially for internal purposes, like storyboarding or editing, and eventually for creating backgrounds and special effects, such as explosions, for their action-heavy films.

“We do a lot of action movies, so we blow a lot of things up and that is one of the things Runway does”, Michael Burns, Lionsgate Vice Chair, for WSJ.

While the Lionsgate-Runway collaboration might be forward-thinking—or perhaps FOMO-driven, as Burns also stated that the studio “could fall behind its competitors” if it didn’t act quickly—it comes at a time of increased scrutiny over the ethical use of AI. As a reminder, Runway has faced legal challenges related to using copyrighted material without permission to train its models. Coupled with the concerns raised during the 2023 SAG-AFTRA strike, this partnership has received strongly mixed reactions.

Did you know? Swiss film director Peter Luisi had the London premiere of The Last Screenwriter canceled due to backlash over the film being entirely written by AI.
Tests made by Orbitae, using Gen-3

Despite a complex landscape where generative AI’s role in entertainment is both celebrated and feared, Lionsgate and Runway clarify that their goal is to enhance the creative process. As Cristóbal Valenzuela, Runway’s co-founder and CEO, said in the company’s official statement: “The history of art is the history of technology, and these new models are part of our continuous efforts to build transformative mediums for artistic and creative expression; the best stories are yet to be told”.

Looking ahead, Runway is considering ways to license these generative AI models as templates for individual creators or studios interested in building and training their own proprietary models. In short, the Lionsgate-Runway partnership could be a landmark in the adoption of generative AI within the film industry. By developing a custom AI model, Lionsgate aims to stay ahead of competitors. As the industry watches closely, this collaboration might set the standard for how generative AI is integrated into the creative process without compromising artistic integrity (we hope). Whether this will lead to an increase in "content" (to use Burns’ words) that lacks substance remains to be seen. Does this mean we might see a generative AI category at the Oscars? Only time will tell. For now, grab your popcorn—not just for the movies, but for the drama unfolding before our eyes.

  • 8 Things You Didn’t Know About Mocap

    The Planet of the Apes reboot films earned widespread praise for their lifelike CGI characters, but what’s the secret behind their believability? The answer lies in mocap, a technique that transfers an actor's performance directly to digital characters. Epitomized by Andy Serkis’ portrayal of Caesar, mocap has transformed how CGI characters are brought to life. Evolving from rotoscoping to today’s performance capture, it’s a constantly advancing technique that continues to give us unforgettable characters. So, here are 8 things you might not know about mocap.

Motion Capture (noun): a technology for digitally recording specific movements of a person (such as an actor) and translating them into computer-animated images. Merriam-Webster Dictionary

1. MOCAP VS PERFORMANCE CAPTURE

At first glance, motion capture (mocap) and performance capture seem similar, but they record different elements. While the former focuses only on body movements, the latter goes further by also tracking facial expressions, eye movements, hands, fingers and even the voice. This results in much more emotionally expressive digital characters. In films like Planet of the Apes, it’s the ability to capture every subtle expression of the actors that gives characters a truly lifelike presence, making them feel more human, and therefore more relatable to the audience.

2. IT ALL STARTED WITH...

In the late 90s, several directors gambled on motion capture technology to bring characters to life. Notable examples include the digi-doubles that populated the ship of Titanic (1997), Jar Jar Binks in Star Wars: Episode I – The Phantom Menace (1999), and Imhotep in The Mummy (1999), which used a blend of CGI and early mocap to animate the resurrected villain. However, the first real leap into motion capture as we know it came with Sinbad: Beyond the Veil of Mists (2000), an Indian-American animated film.
While it didn’t perform well upon release, it was the first feature to extensively use mocap, laying the groundwork for future films to adopt the technology. It also played a crucial role in developing the 3D optical capture techniques that would later dominate the industry. However, it was in 2002, with Gollum in The Lord of the Rings, that mocap truly became a game-changer. Now, actors could "be on set, outdoors, with other actors and in the moment", as noted by IGN. From that point on, mocap evolved into the version we know today, where both physical and emotional performances can be captured simultaneously. Thanks to the constant evolution of this technology, we’ve been able to witness amazing characters and performances in movies, like Davy Jones and Thanos.

3. KING KONG: THE FIRST NON-HUMAN PERFORMANCE

King Kong (2005) marked a pivotal moment for performance capture, becoming the first major use of both full-body and facial capture to create a completely non-human character. Unlike Gollum, who relied on dialogue to communicate emotion, King Kong had to convey everything through body language, facial expressions and animalistic sounds.

“The facial motion capture began to come into existence at that point. We worked with 3D markers; I had about 132 markers all over my face and my eyelids. Kong was a real marriage of physical and facial capture”, Andy Serkis in an interview for Popular Mechanics.

This advancement paved the way for many non-human characters that exhibited very human-like behaviors. Such is the case of Smaug, portrayed by Benedict Cumberbatch, the dragon from The Hobbit: The Desolation of Smaug (2013), or Ted, incarnated by Seth MacFarlane, the mischievous teddy bear from Ted (2012).

4. THE ANIMATORS’ HIDDEN WORK

Performance capture isn’t a perfect, one-step process. Even after an actor’s movements and facial expressions are captured, the raw data often requires cleanup to fix glitches and technical imperfections.
These include smoothing out jerky motions, correcting marker misalignments, and refining any data distortions caused by equipment or environmental factors. Once the data is cleaned up, enhancement takes place—animators adjust the performance to better fit the character’s digital anatomy. For non-human characters like Caesar in Planet of the Apes, this step is crucial. Caesar’s facial structure is different from a human’s, so animators have to adapt the performance to make it feel natural while retaining the actor’s emotional nuances.

Blacksmith made by Orbitae for Winamax - process

5. MOCAP ISN’T CONSIDERED ANIMATION

A surprising fact is that the Academy of Motion Picture Arts and Sciences does not consider motion capture performances to be “animated”. As the technology advanced, with films like The Polar Express and Avatar bringing performance capture into the mainstream, the Academy had to address this new style. In 2010, they updated the rules, stating that “animation must figure in no less than 75 percent of the picture’s running time” and clarifying that motion capture alone is not considered an animation technique. This meant that films like Spielberg’s The Adventures of Tintin (2011), though created entirely using 3D techniques and mocap, did not qualify as an animated film under Academy rules (which are still in effect for 2025).

More recently, this exclusion affected projects like Ishan Shukla’s Schirkoa: In Lies We Trust (2024), a film made with Unreal Engine and using mocap for all the performances. Because of this, Shukla revealed that his movie was not eligible for the Annecy International Animation Film Festival, despite being visually what we would normally consider an animated movie.

This raises another debate: if motion capture is not considered animation, should actors like Andy Serkis be eligible for Best Actor nominations for their CG performances?

6. THE OLDER BROTHER: ROTOSCOPING

Before motion capture, there was rotoscoping, an animation technique invented by Max Fleischer, the creator of Betty Boop. Rotoscoping added realism to animated characters by using live-action footage as a reference: animators would trace over each frame to capture the movement. Disney famously used this method in films like Snow White and the Seven Dwarfs (1937) to give characters more lifelike movement.

7. MARKERLESS MOCAP IS EMERGING

While marker-based motion capture remains the industry standard, markerless mocap—which uses AI to track movement without special suits or markers—is gaining popularity. Tools like Move.ai and Rokoko are making this technology more accessible to smaller studios and independent filmmakers by eliminating the need for expensive rigs and specialized equipment. However, as of today, the accessible versions still have limitations. They lack the precision of traditional systems, especially when capturing fine details like finger movements, complex facial expressions or intricate body interactions. As a result, animators are often required to do more cleanup and enhancement to smooth out the data and ensure accuracy. But it sure is a great tool to play with and make some tests, if you’re trying to do some mocap.

8. NOT FOR EVERY ACTOR

“I have had to work completely differently than I ever have before”, Sigourney Weaver on her work on Avatar.

Acting in mocap requires a unique combination of physical and emotional performance. Unlike traditional acting, where costumes and makeup help convey a character, mocap actors perform in special suits covered with markers. Depending on the movie, like Cameron’s Avatar films, they also often have to act in empty environments, with little to no set design. This type of acting demands not just a lot of imagination, but also huge amounts of training and incredible physical control to bring those fantastic characters to life.
To achieve this, many mocap actors undergo training in disciplines like stunts, mime, or dance to master their body movements and portray both human and non-human characters with precision.
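To make the cleanup step from point 4 a little more concrete, here is a minimal sketch in Python of two of the operations described there: filling short gaps where a marker dropped out, and smoothing jitter with a moving average. This is a hypothetical, heavily simplified illustration of the idea, not how any real mocap pipeline works (production tools apply far more sophisticated filters per marker and per axis); the function names and the toy data are our own.

```python
def fill_gaps(track):
    """Linearly interpolate over runs of missing samples (None).

    A gap at the very start or end of the track is left untouched,
    since there is no known value on one side to interpolate from.
    """
    track = list(track)
    i = 0
    while i < len(track):
        if track[i] is None:
            start = i - 1                      # index of last known sample
            j = i
            while j < len(track) and track[j] is None:
                j += 1                         # find end of the gap
            if start >= 0 and j < len(track):  # gap bounded on both sides
                a, b = track[start], track[j]
                span = j - start
                for k in range(i, j):
                    track[k] = a + (b - a) * (k - start) / span
            i = j
        else:
            i += 1
    return track


def smooth(track, window=3):
    """Moving average; the window shrinks at the edges so length is kept."""
    half = window // 2
    out = []
    for i in range(len(track)):
        lo, hi = max(0, i - half), min(len(track), i + half + 1)
        out.append(sum(track[lo:hi]) / (hi - lo))
    return out


# One coordinate of one marker, sampled per frame, with a dropped frame:
raw = [0.0, 1.2, None, 2.8, 4.1, 3.9, 5.0]
clean = smooth(fill_gaps(raw))
```

Real cleanup software runs this kind of logic over every marker at high frame rates before animators ever touch the data; the enhancement step that follows, retargeting the motion onto a character's different anatomy, is where the hand-crafted artistry comes in.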
