
  • What is the Fake OOH trend?

    The fashion world is no stranger to trends, but the latest—fake out-of-home advertising, or FOOH—has captivated social media, injecting awe and excitement into otherwise traditional ad campaigns. This phenomenon, which began with a viral Instagram post by French fashion house Jacquemus in 2023, has since become a creative tool across various sectors, including the film industry and brand marketing.

    FOOH made by Orbitae

    THE BIRTH OF THE FOOH

    In April 2023, Jacquemus launched an Instagram post that quickly went viral, generating buzz far beyond the fashion community. The post featured their iconic Bambino bags cruising through the streets of Paris as if they were cars. The visuals were so convincing that viewers questioned whether they were real or digitally manipulated—a perfect example of what would later be dubbed "fake OOH". The post, now iconic with 48.8 million views, was the work of 3D artist Ian Padgham, who uses CGI to create fun and improbable social media videos. In an interview with Paper, Padgham revealed that he pushes to have "carte blanche" on the creative process. Initially, Jacquemus wasn't certain about the tone, but ultimately their decision to proceed with this playful concept proved to be very successful. So, they kept on doing it.

    Images from Jacquemus' Instagram

    Today, these kinds of videos are taking over social media, used by brands and the movie industry alike. This fusion of real video footage with CGI elements has captivated audiences and opened new possibilities for digital marketing.

    NOT YOUR REGULAR REEL

    FOOH is all about blending real-world environments with digitally created objects, producing scenes that straddle the line between reality and fantasy—achievements that would be nearly impossible in traditional marketing or standard OOH (out-of-home) campaigns. The range of videos is now as creative and varied as the brands behind them. It also lets you place your product or campaign wherever you want. Have an event in Paris? Put the product near the Arc de Triomphe. Your movie premieres at a festival or in a specific city? No need to plan complex logistics—create a stunning digital element against the real backdrop that fits the occasion. All in all, FOOH's flexibility reduces production costs and time while maximizing the impact and reach of your campaign. This dynamic approach enables brands to craft memorable experiences that resonate with diverse audiences, making it a powerful tool in today's digital-first marketing strategies.

    To create these reels, Padgham outlines a meticulous three-step process. First, the footage is filmed in a way that allows for 3D space tracking, essential for inserting digital elements. Next, the CGI elements, such as the bags, are integrated into the footage with careful attention to lighting, shadows, reflections and the physical space. Finally, hours are spent retouching each detail, ensuring that viewers can't easily detect the manipulation. "The reason these videos do so well is that probably half of the people who see them either think it's real or can't tell, and that's what drives engagement," Padgham explains.

    MARKETING DECEIT OR KEEPING UP WITH THE TIMES?

    With the rise of FOOH, some have questioned whether this trend should have a place in the marketing landscape. Critics argue that these campaigns blur the line between reality and fantasy, potentially misleading audiences and thus hurting the brand. Proponents, however, view them as a legitimate artistic and marketing tool that leverages digital innovation.
    The controversy centers on whether such "fake" elements enhance creativity or undermine authenticity. Despite the debate, FOOH is here to stay. While some ads may make you look twice, the intention is not to deceive but to create something fun and surprising for you to watch on your feed. Numbers back up the trend: a Medium article reports that FOOH has a 20% higher recall rate compared to traditional OOH, and that 74% of marketers have seen a spike in engagement with these digital strategies. In the end, it's still about the same thing: being memorable, all while allowing brands to showcase a different, fun side of themselves. The possibilities are endless, and the impact is undeniable. That's why at Orbitae, with our 15 years of expertise in 3D, we're here to help you harness this fun and innovative approach. Let's create something extraordinary together—something that makes your audience do a double-take and remember your brand. Contact us today to start your FOOH campaign!

  • From syrup to CGI: the evolution of blood in movies

    As September rolls around and the countdown to Halloween begins (yes, we are that kind of people), there’s no better time to dive into one of the season’s most iconic element: blood . Whether it’s the crimson splash in horror flicks or the arterial spray in action scenes, how blood is portrayed can greatly impact your film. But how has its depiction in movies evolved? A BRIEF HISTORY OF BLOOD IN FILM Despite our long-standing fascination with blood (and gore), in the early days of cinema, hemoglobin was rarely depicted in graphic detail. Violence was often implied through shadows and lighting rather than explicitly depicted. An example of this is the shower scene in Alfred Hitchcock’s Psycho (1960), where the stabbing is suggested, not shown. And the blood swirling down the drain? Chocolate syrup! This all changed with Herschell Gordon Lewis—known as the " Godfather of Gore "—and his film Blood Feast  in 1963. Lewis broke new ground by featuring graphic violence and copious amounts of blood, effectively launching the "splatter" genre and paving the way for more explicit depictions of violence on screen.  "I accept Psycho as a film that suggested what was to come later, but it wasn’t like Blood Feast where the tongue gets pulled out", the Godfather of Gore So, for much of cinema history, practical effects were the go-to method for creating blood on screen. SFX artists crafted various mixes, often using corn syrup and red dye as a base, to achieve the desired texture, color and viscosity. Techniques such as squibs (small explosive devices to simulate gunshots), air-filled tubes for spurts and splashes, and pressurized pumps for arterial sprays were essential tools in their toolkit. These methods are still used today, though with more modern and safer approaches (lucky us!). But sometimes, it's all trial and error—like when actor Harry Crosby was blinded for six months after the crimson mixture, made with "special ingredient" Kodak Photo Flo for realism, got into his eye during the shooting of Friday the 13th  (1980), as reported in the Netflix series The Movies That Made Us . It's the constant evolution of these techniques that has allowed filmmakers to use blood not just for realism, but as a storytelling device. For instance, Quentin Tarantino’s approach in Kill Bill  (2003) shows how blood can be more than just a gruesome detail —it can be a key part of the film's stylized, artistic vision. Drawing inspiration from the exaggerated, vibrant red blood sprays in the samurai film Lady Snowblood  (1973), the director opted for a more theatrical and impactful depiction rather than ultra-realism. “I’m really particular about the blood, so we’re using a mixture depending on the scenes. I say, ‘I don’t want horror movie blood, all right? I want Samurai blood”, Tarantino, IGN . However, going practical can present challenges; the messiness of multiple takes can make the process both time-consuming and costly. Sometimes, you only have one shot—one opportunity to get it right. This was the case with the emblematic elevator scene in The Shining  (1980). As Leon Vitali, Kubrick's personal assistant recalls in a Yahoo!   interview: "we spent weeks and weeks and weeks trying to get the quality and colour of the blood as natural as it could be. [...] And then, of course, there were the mechanics of it, because if you have that much pressure inside something like an elevator, it’s going to blow if you’re not careful". It was so stressful, that Kubrick himself left the set when it was time to shoot. 
Needless to say, the stakes were high. Today, this scene would likely be done with CGI, as seen in Spielberg's Ready Player One  (2018). THE RISE OF CGI BLOOD The advent of CGI revolutionized many aspects of filmmaking , including the depiction of blood. But it did not come without controversy. CG blood is often criticized for lacking the gritty authenticity of practical effects. However, whether you like it or not, CGI has undeniably enabled more creative, complicated and definitively gory scenes , ranging from ultra-realistic to more fantastical depictions. But why is it used so often now? Like practical effects, CGI blood is customizable, but it’s also often cheaper and saves tons of time on set—no need for clean-up! Most importantly, you can truly direct it. Similar to the logs in Final Destination 2 (2003), by doing it in post, you can exaggerate the effect, correct it or fine-tune it to perfectly match your vision. This flexibility is evident in films like Deadpool & Wolverine  (2024), where it is used extensively and extravagantly to amplify the violence. Or in 300  (2006), where the spurts of blood are deliberately exaggerated to mimic the comic book's style, amidst epic and complicated stunts. On the other hand, some films require a more subtle and realistic approach. For instance, Joker  (2019), which uses far more VFX than you might expect, incorporates CG in every scene where blood appears—such as the first murder on the train, the apartment fight and the climactic shooting of Robert De Niro's character. A lesser-known example occurs near the film's end, when the Joker stands atop a car and paints a bloody smile on his face—a moment achieved with CG to save time on Joaquin Phoenix's makeup. “That was one of the few moments where we did have help from CGI to create exactly the way we needed it to be”, Makeup artist Nicki Ledermann to Hollywood Reporter. Overall, CGI blood offers several advantages over practical effects. It provides flexibility during the shooting and allows directors to adjust the amount, color and trajectory of blood during post-production. However, it’s important to recognize that practical effects can also be finely tailored to match a film's tone, so the choice between the two often comes down to a director's preference and the specific needs of the film. THE MIX OF BOTH WORLDS In some cases, the best results come from blending practical with CG. This hybrid approach allows filmmakers to leverage the strengths of both techniques. For example, in The Boys series, real (fake) blood is frequently used, providing actors with something tangible to react to and giving VFX artists a reference on how it would behave. “It’s so important for the actors and the camera to experience it. It’s hard to explain it, it’s like a different feeling”, Stephan Fleet VFX supervisor on Corridor Crew . It is the combination of both techniques that sells the illusion, as practical effects provide the audience with a "ground truth" , something that feels real. When VFX is added to the mix, the illusion is fully completed. Extract from Corridor Crew's video - a mix of practical and CG In conclusion, whenever possible, give the audience (and actors) something tangible to connect with. That way, when the time comes for enhancements or full-CG effects, the illusion is already grounded in reality and packs an extra punch. And remember, if you ever find yourself in need of some blood splatter, Orbitae  has got you covered. We’ll help you get the best of both worlds.
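    To make the "you can truly direct it" point concrete, here is a toy sketch of why CG spurts are so tweakable: a handful of particles launched with parameters (count, speed, direction, gravity) that can be dialled between versions without resetting the set or cleaning anything up. It is a deliberately tiny illustration, not how a production fluid solver works, and every name in it is invented for the example.

```python
import numpy as np

def spray(n_particles=200, speed=6.0, angle_deg=35.0, gravity=9.81,
          spread_deg=10.0, seed=0, duration=1.0, fps=24):
    """Ballistic 'spurt' of particles: returns positions of shape (frames, n, 2).

    Every parameter is a dial a supervisor could turn between versions:
    more particles, a wider cone, stronger gravity, a different direction.
    """
    rng = np.random.default_rng(seed)
    angles = np.deg2rad(angle_deg + rng.normal(0.0, spread_deg, n_particles))
    speeds = speed * (0.8 + 0.4 * rng.random(n_particles))
    vel = np.stack([speeds * np.cos(angles), speeds * np.sin(angles)], axis=1)

    frames = int(duration * fps)
    t = np.arange(frames)[:, None, None] / fps            # (frames, 1, 1)
    pos = vel[None, :, :] * t                              # straight-line motion...
    pos[..., 1] -= 0.5 * gravity * t[..., 0] ** 2          # ...pulled down by gravity
    return pos

# Two "takes" of the same shot, no clean-up needed between them:
wide = spray(angle_deg=50, spread_deg=20)   # exaggerated, stylised spray
tight = spray(angle_deg=20, spread_deg=4)   # subtle, realistic spray
print(wide.shape, tight.shape)              # (24, 200, 2) twice
```

    Change angle_deg or spread_deg and you have a new take in seconds; that is the flexibility described above, minus the mop.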

  • Oscars 2024: VFX Nominees

    Everyone's talking about it. The 96th Oscars ceremony, set for March 10, 2024, has unveiled its nominations. But who are the nominees in the Best Visual Effects category? We tell you.

    The Creator by Gareth Edwards

    This film made waves last year with the director's unconventional approach to staging a science fiction story on a limited budget (read article). It's no surprise, then, that it's among the Oscar VFX nominees. Instead of using green screens or motion capture suits, the effects, created by ILM (Industrial Light & Magic), were added directly onto the shots. "We shoot everything as if it was there (...). And then, edit the movie and when we were sure about what the shots were, that's when we designed the world", he said in an interview with AMD, adding: "essentially, we would design the science fiction on top".

    Source: ILM Facebook page

    Godzilla Minus One by Takashi Yamazaki (WINNER!)

    For the first time, the King of Monsters is nominated for an Oscar! And what a joy that it's in its Japanese version! Indeed, this star has been appearing on our screens since 1954, and it's only 70 years later that it has finally been recognized. But let's get back to the point. The film was nominated for Best Visual Effects, which were overseen by Takashi Yamazaki himself (who also wrote and directed the film). It has a total of 610 visual effects shots made by 35 artists, according to The Hollywood Reporter. In comparison, Top Gun: Maverick had 2,400 and a budget ten times higher.

    Guardians of the Galaxy: Vol 3 by James Gunn

    This third installment particularly touched the audience's heart with the backstory of Rocket (aka Rabbit). A broken past, scientific experiments, animal abuse: it's all there. Last year, at the NIFFF, we had the opportunity to see Nathan McConnel, animation supervisor at Framestore, and Stuart Bullen, VFX supervisor at RISE, both involved in the film's effects. And we learned a lot. For example, did you know that all the animations were done by hand? No motion capture, not even for Groot. In total, over 800 visual effects artists participated in the film. The most difficult scene? The animal stampede at the end of the film, when they are released. The adorable baby raccoons won everyone's heart.

    Tests conducted by Framestore.

    Mission Impossible: Dead Reckoning Part 1 by Christopher McQuarrie

    With this film, McQuarrie continues the legacy of the series in terms of innovative special effects, combining cutting-edge technology and physical feats, especially with Tom Cruise, known for pushing the boundaries of the (im)possible. "McQuarrie stressed the importance of designing all VFX shots with 'How would this be done practically?' at the forefront of our minds. So we had to contemplate how one would realistically film and light a 150m-long submarine at ocean depths", explains Joel Green, effects supervisor at beloFX, who worked almost exclusively on the opening sequence, in an interview with Art of VFX. The film required a total of 2,640 visual effects shots, reflecting the scale of the work done.

    Napoleon by Ridley Scott

    Here is another film promoted as having "no CGI" nominated for the Best Visual Effects Oscar (would you like to read an article about it? Let us know). Interviews given by Scott have even become memes in the industry. As with films such as Oppenheimer, Hollywood and the media often take a critical stance on the intensive use of visual effects, but what would these movies be without them?
    Although 100 real horses were indeed used for the battle scenes, that number is far too modest to give Napoleon its epic scale. Considerable work was therefore dedicated to extending the sets, adding horses and characters, and incorporating snow, boats and other scenic elements. Although invisible, CGI was essential to the making of this movie; it's what makes the illusion come to life.

    Meme that circulated on social media

    Which film was your favorite? Which movie do you think should have been nominated for outstanding VFX? Let us know in the comments.

  • Sora, are we fu**?

    OpenAI made headlines again, this time with their new tool, Sora: a text-to-video generator that has created both excitement and concern across various sectors, including the realm of filmmaking. Why? Because unlike its contemporaries, Sora seems to produce more photorealistic videos (and animated ones), with a lot of movement, in a fairly reliable way. So, here is what you need to know and what we think about it.

    Prompt: Photorealistic closeup video of two pirate ships battling each other as they sail inside a cup of coffee.

    SORA: A LEAP IN CONTENT CREATION

    At its core, Sora is another text-to-video model. However, per OpenAI's website, it "is able to generate complex scenes with multiple characters, specific types of motion, and accurate details of the subject and background. The model understands not only what the user has asked for in the prompt, but also how those things exist in the physical world". Which, according to the videos published, is indeed quite impressive. Here's a bit of an info dump:

    - Sora's videos are up to 60 seconds long in full HD (1920x1080).
    - It is not yet available to the public, and there is no release date yet.
    - It's still under assessment for critical areas; OpenAI will share the progress of their research on their website.
    - You can see some of the videos they are generating over on their TikTok.

    FEARS AND TURMOIL

    Concerns have immediately surfaced about the implications of such technology regarding fraud, misinformation and other possible misuses (including copyright). According to OpenAI, they are "taking several important safety steps ahead of making Sora available", and add: "we are working with red teamers — domain experts in areas like misinformation, hateful content, and bias — who will be adversarially testing the model".

    Prompt: An extreme close-up of an gray-haired man with a beard in his 60s, he is deep in thought pondering the history of the universe as he sits at a cafe in Paris, his eyes focus on people offscreen as they walk as he sits mostly motionless, he is dressed in a wool coat suit coat with a button-down shirt, he wears a brown beret and glasses and has a very professorial appearance, and the end he offers a subtle closed-mouth smile as if he found the answer to the mystery of life, the lighting is very cinematic with the golden light and the Parisian streets and city in the background, depth of field, cinematic 35mm film.

    Although this is clearly important, this is not an article about that. This is about Sora's impact on filmmaking, with some saying that we're doomed, that it's the "end for directors" or worse. New technologies have always sparked fear and apprehension, but history shows us that the introduction of new technology, while initially daunting, does not necessarily lead to the obsolescence of traditional skills and roles, but rather to the rise of a new set of skills. This is similar to when everyone was going to (magically) become a photographer, because we all have an HQ camera in our pockets. It did not happen. The majority still takes crappy pictures of their food (no offense). Sora and similar tools are unlikely to replace the nuanced expertise of film directors and technicians. But they will certainly change the panorama, as they do offer more individuals new tools to bring their vision to life, potentially enriching the industry with a wider array of stories and perspectives.

    TECHNOLOGY AND CINEMA: AN ONGOING EVOLUTION

    Let's not forget that the film industry has always thrived on technological innovation.
    From the invention of the camera itself to the use of CGI in creating visual effects instead of matte paintings or stop motion, each advancement has brought changes, opening new avenues for creative expression. Sora, in this light, is but the latest chapter in filmmaking's ongoing evolution, offering tools that were once the exclusive domain of Hollywood to a broader audience.

    Prompt: Borneo wildlife on the Kinabatangan River

    Stock footage as we know it, however, may become obsolete, although further testing is required to determine how well such clips integrate into filmed scenes. We have tested numerous AI tools to assess whether they're production-ready. As of the publication date of this article, few have reached that stage; one that has is Adobe Photoshop's generative AI tool, which we used to digitally demolish a large building (would you like to know how we did it? Let us know in the comments!). That's another thing... it will change how we make VFX (again!). So, let's brace!

    AI VIDEO GENERATORS, A NEW MEDIUM

    The rise of AI video generators, such as Sora, marks an exciting evolution in digital content creation and filmmaking. However, Sora is not the only player in this field. Google is also researching its own technology, called Lumiere, and Pika emerges as a strong competitor to Runway. The latter has even introduced specific features like zoom in/out and pan left/right, alongside the traditional text and image prompts, which is indeed very cool. And these are just a few examples! These tools promise to democratize video production. Yet our testing reveals a more complex reality. While they empower creators with new forms of expression, mastering these platforms often requires a blend of creativity, technical skill and patience. Which means the rise of a new type of artist. The allure of AI-assisted video creation is undeniable, yet it's accompanied by a learning curve and an inherent element of randomness that challenges the notion of 'effortless' content generation. It is not as magical as it seems.

    EMBRACING THE FUTURE

    AI is bringing forth a new era for ideas. It offers a new lens through which to view creativity. Before, you not only had to have the idea, but also the know-how (how to paint light, choose colors, etc.). Now, it's more about the limits of your imagination and how far you push them. This shift is what we find both exhilarating and intimidating at the same time. Indeed, with AI, anyone can create an aesthetically pleasing image, but it's yet another one in an endless pool of content (which may lead to a potential 'standardization' of art). So, the real questions are: What are you going to ask it? How are you going to edit it and make it your own? How are you going to use it to push your own creativity?

    Prompt: A petri dish with a bamboo forest growing within it that has tiny red pandas running around.

    If you're a filmmaker or an artist, just give it a go! There are hundreds of AI tools today (and not only video generators). See which ones can fit or enhance your workflow, adapt to your pipeline, streamline processes and even help you explore new ways to shape an idea. Use AI to visualize pitches, to help you write dialogue in a language in which you're not fluent, and to discover fresh strategies for transferring ideas from your imagination to paper and, eventually, onto the screen. Thus, the mixed feelings of excitement and concern surrounding technologies like Sora are understandable.
    However, their true value and impact will be shaped by our choices as artists in how we adopt and integrate these tools. By viewing them as enhancements to human creativity and expertise rather than replacements (this also holds true for studios... cough cough), the filmmaking industry is set to continue its evolution, crafting stories that continue to inspire and amaze.

  • "Dune: Part Two": how they did it?

    In the past, attempts to bring the story of Dune, by Frank Herbert, to the big screen were made, but without success. Jodorowsky , in the 1970s, envisioned and assembled a dream team to bring the story to life, but the project never received the green light. Then, in 1984, David Lynch's controversial version was released. In 2021, it was Denis Villeneuve's turn and, this time, it was a success. The second installment, released in early 2024, continues in this vein. It has a distinct, epic, and captivating look. But how did they do it? What technologies were used? Here are our top 3 techniques used to bring this monumental science fiction film to life. THE FREMEN'S BLUE EYES Changing eye color on screen is a well-known challenge, sometimes leading adaptations, like those of Harry Potter or Daenerys Targaryen, to deviate from the original descriptions. Contact lenses, impractical, and manual visual effects, costly in time and resources, limit these changes. For "Dune: Part Two", the DNEG team innovated with a more efficient solution. Unlike the first film, where manual addition of blue eyes was the norm, this sequel introduced artificial intelligence. They trained a machine learning model on shots from the first film, so the algorithm could automatically recognize and color human eyes blue. Although it required adjustments to avoid some errors, such as changing the eye color of non-Fremen characters and some minor touch-ups, this revolutionary method, described by Paul Lambert, VFX supervisor at DNEG , marks a significant advance in post-production techniques. IN PREVIS WE TRUST The use of the Unreal Engine tool was crucial for planning and producing the film. At the SXSW festival, a panel called " Dune Two, Real-Time Tech & the Implications for Storytelling " highlighted how integrating this technology brought the film to life, thanks to Previs. “I would encourage many people in my position to explore Unreal, to explore other pre-visualization techniques that can help you support your director as much as you can,” Jessica Darhammer, co-producer. According to Jessica Derhammer, co-producer of the movie, given the magnitude of the film and the added complexity of shooting in various locations, including the desert, there was a lot of prep involved. So, they had to align pretty early on the creative side with the logistics. The question quickly became, "practically, how are we going to shoot this in six months?". That's when they decided to use Unreal Engine to previsualize the sets and even the characters. Drones were also deployed to scout locations. The data was then imported into Unreal Engine, allowing them to work in advance on blocking, lighting, shadow areas, sunlight hours, angles, and much more. "You’re not making these decisions in a vacuum. You’re actually looking through the real camera lens and then you can pop out of that view and see what’s required of the scene around it; where can I position my lights? How many lights do I need? [...] And it really allows the filmmakers to all congregate and make informed decisions together that serve every individual department”, confirms Brian Frager from Epic Games. GLADIATOR SCENE ON HARKONNEN To capture the unique atmosphere of the Harkonnen planet, a specific infrared shooting technique was employed, transforming the images into black and white and giving the scenes an unreal and sinister aspect. 
    The technique relied on the camera sensor's sensitivity to infrared, a method already exploited in other films like "Nope" to create a night effect, and even by Villeneuve himself for visual effects in other projects. In this case, the goal was to produce a feeling of eerie unreality, where the characters' skin becomes almost translucent. This artistic decision, once made, was irreversible during shooting, highlighting the team's commitment to this particular aesthetic vision. As the director explained to IndieWire: "I had to warn the studio that there was no way back. It's not an effect that we did in post-production", adding, "I love the commitment and the risk of it". This method also posed a real challenge for the makeup and costume departments, requiring exhaustive tests to ensure that colors and textures held up under infrared. The reactions of materials to specific light and heat conditions were unpredictable; even tattoos hidden under traditional makeup were revealed under infrared.
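    For the curious, the core idea behind that eye treatment (find the irises, then grade only those pixels) can be sketched in a few lines. To be clear, this is not DNEG's pipeline: the sketch below assumes the hard part, a per-frame iris mask like the one their trained model would produce, already exists, and every function and variable name is invented for the example.

```python
import numpy as np

def tint_irises(frame_rgb, iris_mask, target_blue=(0.35, 0.65, 1.0), strength=0.8):
    """Push masked iris pixels toward a blue tint while keeping their brightness.

    frame_rgb : float array (H, W, 3) in [0, 1]
    iris_mask : float array (H, W) in [0, 1] -- in a real pipeline this would be
                the per-frame output of the trained eye-segmentation model
    """
    luminance = frame_rgb @ np.array([0.299, 0.587, 0.114])        # (H, W)
    blue = luminance[..., None] * np.array(target_blue)            # keep the shading
    alpha = (strength * iris_mask)[..., None]                      # soft blend mask
    return frame_rgb * (1.0 - alpha) + blue * alpha

# Toy usage with a synthetic frame and a hand-made circular "iris" mask:
h, w = 120, 160
frame = np.full((h, w, 3), 0.4)
yy, xx = np.mgrid[0:h, 0:w]
mask = ((yy - 60) ** 2 + (xx - 80) ** 2 < 12 ** 2).astype(float)
graded = tint_irises(frame, mask)
print(graded.shape, graded[60, 80])   # the masked pixel is now distinctly blue
```

    In production the mask has to hold up across motion blur, reflections and thousands of shots, which is exactly why training a model to produce it was worth the effort.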

  • We tested various AI music generators

    We often discuss image and video generators , but let’s shift our focus to music generators — a burgeoning field in the AI landscape. With a plethora of options available, we decided it was time to put some of these tools to the test to gauge their evolution and what they currently offer. These generators operate by analyzing extensive datasets of music, learning from a variety of styles and compositions. Users can specify parameters such as genre, tempo and mood, guiding the AI to produce music that aligns with these preferences. As filmmakers and musicians ourselves —fact, check out our Spotify —, we find these tools interesting, as they help quickly sketch out song ideas or create simple tracks for social media videos or others. COPYRIGHTS AND LICENSING WITH AI-GENERATED MUSIC Now, before we go on, keep in mind that the rise of AI-generated music obviously brings up significant questions regarding copyrights. So, it’s essential for users to understand the terms and conditions of usage and licensing, as they can vary greatly between different services. Some platforms might retain rights to the music created, while others may offer complete ownership to the user. Some allow you to monetize the song, but only if you have paid plans. Always review the licensing agreement carefully to ensure that you retain the rights needed for your intended use, may it be ads, content creation or even short films and movies. TESTING AI MUSIC GENERATORS WITH THE SAME PROMPT To compare the capabilities and outputs of different AI music generators that you can try out right now, we used the same prompt across different platforms. This experiment highlighted the strengths and creative diversity of each tool. Prompt: “Make a grunge song, very passionate and depressed, the likes of Something in the Way. Make it about self-doubt, control and aliens” UDIO  - As the newest kid on the block, UDIO is breaking the internet. Despite being in its beta testing phase, the results it produces are quite impressive. It effectively captures the intended genre and the voices are convincing. It generates two songs per prompt, each approximately 30 seconds long, complete with lyrics. The songs can be extended, if needed.   Currently, the control over the output is limited; you can only adjust a few parameters such as prompts, tags, whether the lyrics are custom or generated, and if the composition should be instrumental. Lyrics generated I'm no stranger here (stranger here) Grasp at threads of faith, so sheer (Hey) darling, fight your own fight Don't trust your shadow In the alien night [Bridge] My heart's a quiet, grungy reverie Alien thoughts, they come to steer (come to steer) Whispers of doubt, what I most fear (Hey) darling, beneath the skin so tight Battles rage on in the mind's dim light [Chorus] In the vast unknown, I'm losing my hold Can't tell if I'm the captive or the patrol SUNO  – We first tested it last year and it seems to have had a nice evolution since then. Today, it generates two songs per prompt, each with two verses and a chorus. Additionally, it creates lyrics—which you can customize or you can put your own—and an accompanying image for your song. It’s user-friendly, though it lacks extensive parameters to control the final output, similar to UDIO. The voices are decent and it generates some intriguing ideas. 
Lyrics generated [Verse] In the darkest corners of my weary mind I find the shadows of my own disbelief Lost in a maze of self-doubt and despair I question if I'll ever break free [Verse 2] I feel a fire burning deep within my soul Yet it’s snuffed out by the hand of control Torn between the demons and the light I yearn for something more, something right [Chorus] This alien world pulls me down, down, down I'm lost in the depths, feeling it all around Shadows of the mind, they consume my being In this grunge symphony, I find my meaning SONAUTO  – This tool is quite straightforward; you can use a prompt or even a song as a reference, and that's it. It generates three songs per prompt, complete with lyrics. However, the quality leaves much to be desired—it's as if we're dealing with a band that needs more rehearsal. The "singers" require significant improvement and the overall sound is pretty messy. Despite these shortcomings, it does manage to create a song within the specified genre, and it's also fun to hear AI fail. :) Lyrics generated [Verse 1] Underneath the shadow where I hide Doubts like creatures creeping up inside Grasping for the truth beyond my reach Alien thoughts that no one dares to preach [Chorus] In this world, I’m lost, can’t find my way Like I’m controlled, just a pawn in their play Among the stars, whispers calling my name I'm just a soul, in their cosmic game [Verse 2] Echoes of a life that feels so strained Wrestling with the thoughts that keep me chained Searching for a sign in the night sky A glimmer of hope, or just another lie? [Chorus] BEATOVEN   – This tool is primarily designed to create moods and atmospheres rather than full songs. We conducted some tests with it a year ago and noticed little evolution in its capabilities since. While it offers several parameters to control aspects like emotions, instruments, tempo and genre, it hasn’t fully satisfied our requirements or lived up to our expectations. Disclaimer : they don't allow downloads unless you pay a subscription, so here's the one we created a year ago - as said, the concept is similar, more atmospheric than songs per se. SOUNDRAW  – This one is very different from the others. It doesn't require a text prompt; users simply set the length, tempo and genre. It generates—or rather, spurts out—a multitude of song ideas that sound more like MIDI tracks, serving as a foundational base for further creative development.   The tool also allows users to “shorten intros, rearrange choruses, and personalize your song structure”, as described on its website. Although it doesn't support adding vocals for this genre, our tests with trap music revealed it occasionally inserts brief 'hey' sounds—not full singing voices. We recommend keeping an eye on this tool if you're a musician. However, for filmmakers, it might not be the ideal choice. Disclaimer : they don't allow downloads unless you pay a subscription. SPECIAL MENTION : AIVA of Nvidia - it offers the ability to create specific and customized music. Unlike others, it doesn't accept text prompts. Instead, you can create a song based on a style, a chord progression, step-by-step adjustments, or musical influences.   It's more complex than the others, and we plan to explore it further. However, for the purposes of this article, it's not included in our main comparison. What we've observed so far is its limitation in recognizing the 'grunge' genre. 
It's also worth noting that this tool seems to be particularly suited for producers and musicians, rather than filmmakers or general content creators. We also explored other tools like Mubert and Stable Audio , though these didn't quite capture our interest or provide the fun results we hoped for. And there's still a wealth of AI technology out there to explore, such as Soundful and the upcoming MusicLM from Google. In conclusion, much like other types of AI generators, those that generate music can be both fun and useful. It’s essential that we harness them wisely to ensure that creativity flourishes without stagnation, while also respecting the rights of all creators involved. How we use these AI tools will significantly shape the future of art. What is certain is that these music generators will enable more people to explore their musical potential, paving the way for a new breed of artists to emerge.

  • To AI or not to AI?

    That is the question in the filmmaking industry right now (and probably everywhere else). As with any new technology, there's tension between innovation and tradition, and AI is no exception. In the filmmaking industry, although revolutionary, it's also sparking significant backlash (we still remember the opening title sequence of Secret Invasion). Filmmakers, actors and audiences are grappling with questions about the limits of AI's role in creative processes. Here are some of the most recent controversies that highlight the debate over AI's place in filmmaking. Where do you think the limit is?

    REMASTERING OLD MOVIES WITH AI

    Recently, AI technology has been used to remaster classic films in 4K resolution, including James Cameron's "True Lies", "Aliens" and "The Abyss", receiving mixed reviews. Some say that stripping away the grain, among other pristine enhancements, makes everything feel less real, even a bit weird, which raises questions about the balance between enhancing image quality and preserving the original aesthetic. However, this kind of backlash is not new. In 1998, when "Titanic" was released on LaserDisc and VHS, significant work was done to erase imperfections from the negative. Yet some viewers objected, insisting that the original flaws, like scratches, should remain. Geoff Burdick, an executive at James Cameron's Lightstorm Entertainment, told The New York Times: "There were a lot of folks who said, 'This is not right! You've removed all of this stuff! If the negative is scratched, then we should see that scratch.' People were really hard-core about it". So today's reaction came as no surprise to him.

    AI-GENERATED PROMOTIONAL MATERIAL

    A24 released AI-generated posters for its latest film, "Civil War", depicting chaotic scenes. Fans were quick to notice wonky details (such as a three-door car), raising questions about the impact of AI on the real artists who could have done the work. Some even called it false advertising, as the images did not appear in the movie. However, a source told The Hollywood Reporter that "the entire movie is a big 'what if' and so we wanted to continue that thought on social media — powerful imagery of iconic landmarks with that dystopian realism", and that is why they did this campaign.

    ©A24 - Instagram page

    Last year, it was Disney that was accused of using AI to generate a poster promoting "Loki", although the company later debunked it, according to Mashable.

    GENERATED ELEMENTS WITHIN MOVIES

    The horror film "Late Night with the Devil" faced backlash for using AI to generate three 1970s-style title cards. Some people on X called for a boycott; others claimed that it starts with small things, like three title cards, but ends up undercutting and underpaying artists. The writer-director Cairnes brothers responded to the controversy by telling Variety that "in conjunction with our amazing graphics and production design team […], we experimented with AI for three still images which we edited further and ultimately appear as very brief interstitials in the film".

    Left: movie poster / Right: AI-generated card ©IFC Films and Shudder

    In another case, AI-generated posters appeared in an episode of "True Detective", sparking discussions about AI's use in background imagery and its impact on the series' authenticity, as Futurism reported. Last year, it was Netflix Japan that came under pressure after announcing on X that they had used AI-generated background art for an animated short called "Dog and Boy".
    AI-GENERATED VOICES

    In the 2024 remake of "Road House", allegations arose that AI was used to recreate actors' voices during the 2023 SAG-AFTRA strike. According to Looper, R. Lance Hill, the original writer, filed a lawsuit against Amazon Studios and Metro-Goldwyn-Mayer, claiming AI was used for Automated Dialogue Replacement (ADR) to speed up production. This raised concerns about using AI to replace actors' work (during a strike or not). However, a spokesperson for Amazon refuted the claims. You can also read our article about other voice cloning cases. On the other hand, AI has been used for positive purposes. In 2022, Fortune reported that Sonantic, an AI-based technology company, "masterfully restored" Val Kilmer's voice, which he lost after a two-year battle with throat cancer. However, Paramount clarified that this technology wasn't used in "Top Gun: Maverick", despite rumors to the contrary – but they also said the movie had zero CGI, which... you know.

    AI USE IN DOCUMENTARIES

    While some uses of AI in fictional movies may be acceptable (although not without controversy), it becomes a different story in documentary filmmaking, where authenticity is crucial. Netflix faces criticism for its documentary "What Jennifer Did", which allegedly used AI-generated images without clear disclosure. Futurism was the first to point out the inconsistencies in the images that depict Jennifer Pan's "bubbly" personality. However, in an interview with The Toronto Star, executive producer Jeremy Grimaldi said: "The photos of Jennifer are real photos of her. The foreground is exactly her. The background has been anonymized to protect the source". That sidesteps the question of whether AI tools were used to modify them. Regardless, for anyone who has tinkered a bit with AI, the images do raise serious questions, and the lack of transparency about AI use crosses a critical line into malpractice.

    SO, WHEN TO AI AND WHEN NOT?

    The question of whether to embrace AI in filmmaking or avoid it remains a hot topic. While AI has undoubtedly made some tasks easier, sometimes reducing the need for larger production teams, it can't replace human creativity and insight. The backlash against AI by audiences often stems from a lack of transparency, or from fear that technology will erode the artistic integrity that filmmakers and film lovers value. However, these reactions can also drive filmmakers to use AI without full disclosure, leading to greater mistrust, akin to the "zero CGI" campaigns. So, when should AI be disclosed? In documentaries and other journalistic works, transparency seems crucial. But in fictional films, the line is less clear. Should we require studios to disclose every AI tool used and how it was used? What about other software, or even machines, like sewing machines? That seems a bit excessive. Ultimately, the debate over AI in filmmaking reflects a larger struggle between innovation and tradition. But did you know that AI has actually been part of the industry for a few years now? We're only hearing about it now. Let us know in the comments if you'd like us to cover that topic too!

  • Noseless villains: when SFX meets VFX

    When you're creating a villain, you'd better give him or her features that are easily and immediately recognizable. This is important, as they need to be memorable. You can achieve this through the profile, the voice or a specific feature, like a scar or the absence of a nose. In fact, removing this central part of the face, a part we all take for granted, makes the villain less human and closer to death, and therefore more of a potential threat. That is why noseless antagonists are so common in the villain arena. But you guessed it: creating a character without a nose is no easy endeavor. To make it believable, you have to blend the practical with the digital. It's a perfect example of SFX meeting VFX.

    THE GHOUL - FALLOUT

    Walton Goggins is the actor who plays the Ghoul in the recently released series adaptation of the game Fallout. The SFX make-up, designed by Vincent Van Dyke and applied by Jake Garber, took around 5 hours to put on, including prosthetics and dentures, though the team was eventually able to bring that down to 2 hours. To remove the nose, they painted a few white dots over it, and VFX studio FutureWorks India stepped in to erase it in at least 500 shots, according to Looper. The actor told Deadline that the transformation was "extremely anxiety provoking" at first, as he had to figure out how to act, express himself and talk with all these prosthetics on.

    VOLDEMORT - HARRY POTTER

    He-Who-Must-Not-Be-Named had a serpent-like nose that was very hard to create. According to Shaune Harrison, key prosthetics designer on Harry Potter and the Philosopher's Stone, the producers initially wanted the nose to be removed practically. "Even though we knew it was fairly impossible, we went ahead and sculpted a version which of course was rejected", he describes on his website. They therefore opted to remove it digitally, adding tracking dots to the face, which proved incredibly hard.

    Left: prosthetic test | Middle: tracking dots | Right: final look with VFX

    In an interview with RadioTimes.com, Paul Franklin, the film's visual effects supervisor, said that Voldemort's nose "had to be painstakingly edited out, frame by frame, over the whole film. And then the snake slits had to be added and tracked very carefully using dots put on his face for reference", adding: "The art and time that goes into those nostrils should never be underestimated".

    RED SKULL - CAPTAIN AMERICA

    Red Skull is such an important and recognizable character in the Marvel comics that it was a great challenge to recreate him for a live-action movie. The beautiful seven-piece silicone prosthetics, applied in around 3½ hours by SFX make-up artist David White, were designed to make sure that the features of the actor, the one and only Hugo Weaving (LOTR, The Matrix, etc.), were never lost. Then it was time for digital enhancement. "His nose had been simply left black by make-up, and we had to paint that out replacing it with a CG cavity complete with sinewy tissue in his sinus", says Jonathan Fawkner, visual effects supervisor at Framestore.

    Left: make-up with tracking dots | Right: final look with VFX ©Walt Disney Company

    What seemed like a relatively simple brief of nose replacement became more complicated than expected, as Fawkner explains: "the mask is a beautiful piece of work, but, ultimately, it sat on top of [Hugo's] face, with all that that entails.
    It bulged over his neck, over the back of the head, it had too prominent a chin in some shots […]. Hugo's performance pushed the mask into places which prosthetics couldn't anticipate". So, what did they do in addition to removing the nose? In the end, they had to recreate a full 3D version of the head, among other things. Here's the list of Red Skull VFX enhancements:

    - Replaced the nose with a CGI one
    - Eyes: painted out the eyelashes, darkened the sockets and sunk the eyes a bit more
    - Thinned down the lower lip so that it's less fleshy
    - Squared up the jaw
    - Took out the gap between the actor's teeth
    - Made the cheeks gaunter
    - Erased any creases or rolls that would normally form with the prosthetic while shooting (in the neck, for example)
    - Sometimes reduced the volume of Hugo's head

    VECNA - STRANGER THINGS

    With Vecna, the Duffer brothers, creators of Stranger Things, wanted an iconic villain akin to the Night King. So it was only logical that they contacted the man who brought the Game of Thrones villain to life: Barrie Gower. Inspired by the concept art made by Michael Mayer, the team at BGFX made a full body cast of the actor in order to sculpt and mold up to 25 different prosthetics. In total, it took around 8 hours for the team of make-up artists to apply and paint all the body, head and face appliances. "It was very clear from day one that we would work very closely with the VFX team", Gower explains to Vanity Fair; the VFX team handled enhancements like the removal of the nose (painted black with white dots) as well as the moving vines all over his body. In addition to that, although they sculpted and practically created Vecna's left hand, it had to be completely replaced by a CG one, because "the on-set practical suit wasn't enabling to have proper acting with it. So, every time you see this mutated hand, it's the work of the animation team", as explained by Julien Héry, VFX supervisor at Rodeo FX. See the full breakdown here.
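    A quick aside on those painted-on dots: production matchmove solves a full 3D track of the head and camera, but the basic "find the dot again in the next frame" step can be sketched with simple template matching. The snippet below is a toy 2D illustration, assuming the frames are already loaded as images; the box coordinates and names are made up for the example, and it is nothing like a real nose-removal pipeline.

```python
import cv2

def track_marker(frames, first_box):
    """Follow one painted-on tracking dot across frames with template matching.

    frames    : list of same-sized BGR images (numpy arrays)
    first_box : (x, y, w, h) around the marker in the first frame
    Returns a list of (x, y) top-left positions, one per frame.
    """
    x, y, w, h = first_box
    template = cv2.cvtColor(frames[0][y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    positions = []
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        result = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, top_left = cv2.minMaxLoc(result)      # best match location
        positions.append(top_left)
    return positions

# In a real shot these positions would drive the patch that paints out the nose
# (or anchor the CG replacement); here they would just be collected, e.g.:
# positions = track_marker(loaded_frames, first_box=(410, 220, 16, 16))
```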

  • Showrunner: The Netflix of AI

    This weekend, we saw a meme of a guy sitting on his couch, typing a prompt to create the kind of movie he wanted to watch. The caption read: “movies in 2027”. To be honest, we didn’t know how to feel about it. Little did we know that while having our coffee this morning, we’d find out it’s already happening! A new player has entered the ring, promising to revolutionize how we create and consume content: Showrunner. WHAT IS SHOWRUNNER? Showrunner is a text-to-episode system, an AI-powered platform designed to assist in the making of “AITV”, as described on their website . Created by The Simulation, the platform offers tools that leverage AI to help script, produce and even cast shows. The goal is to democratize content creation, making it accessible to a broader range of people who have stories to tell but may lack traditional resources. In fact, their target audience is people outside of the filmmaking industry—non-professionals. “It’s the Netflix of AI”, founder and CEO Edward Saatchi told Forbes . “Watch an episode, or make an episode” With this tool, users can create scenes and episodes lasting from 2 to 16 minutes by providing a short prompt. The platform features AI-generated dialogue, voices, editing, various shot types and consistent characters. However, as Saatchi told Theoretically Media , episodes are more episodic in nature for the moment, so you have to “think more like a sitcom where each episode is self-contained and less like an 8-season HBO epic”, although they are working on making it more consistent. Additionally, they are limited to specific styles: anime, 3D animation and cutout. Showrunner launched last Thursday with teasers for 10 shows already in development. Currently in an alpha program, the platform has a waitlist with over 50,000 people, according to their website. However, if you have a comedy series idea, you might get early access , as they are currently focusing on that genre.  INDUSTRY CHALLENGES The launch of Showrunner has generated significant buzz and turmoil in the filmmaking industry, which is still recovering from the writers' and actors' strikes and ongoing negotiations with IATSE, the union representing many of the crew members essential to film and television production. In addition to that, on the same day Showrunner was introduced, Sony Pictures Chief Executive, Tony Vinciquerra, announced at an investor conference in Japan that the company plans to explore using AI to produce films for theaters and television more efficiently, as reported by The Hollywood Reporter . This highlights a broader industry trend towards integrating AI into various aspects of film production , a trend that contributed to the recent strikes. But, as George Lucas told Brut during the Cannes Festival, the use of technology in filmmaking is not only inevitable but has been a staple for over 25 years. However, these disruptive technologies come with their fair share of pain. Echoing this sentiment, DreamWorks founder Jeffrey Katzenberg stated at a Bloomberg conference in November 2023 that AI would drastically change how animated movies are made, reducing the resources needed to just 10% of what was previously required. Showrunner exemplifies this potential. "In the good old days when I made an animated movie, it took 500 artists five years to make a world-class animated movie. 
I think it won’t take 10 percent of that", Katzenberg These developments, coupled with the recent wave of layoffs in the animation and VFX industries and the closing of several animation studios, paint a worrisome landscape for those who create content and entertainment. In short, the integration of AI presents both opportunities and significant challenges, as the industry grapples with the implications for traditional creative roles and job security. THE SILVER LINING While Showrunner arrives with a strong and innovative allure, much like Sora , its long-term impact remains uncertain. The platform has the potential to democratize content creation, yet it's clear that: AI alone cannot replace human creativity and originality. Not everyone is a good storyteller, which is why writing as a profession exists. A single prompt is like an idea. But to make it interesting (full script) is a whole 'nother story. So, initially, Showrunner may attract a lot of interest, but sustaining that interest will require more than just novel technology— it will need compelling, human-driven stories. When we think about it, the future of content creation can be summed up with a simple equation: AI replicates existing ideas + Hollywood’s fear of innovation = more generic movies to come, which is the root problem we are having right now. Or, in the father of Star Wars' words: “The stories they tell are just old movies. There’s no original thinking […]. Big studios don’t want new ideas, they don’t have the imagination to see something that isn’t there”. This suggests that we may see a rise in smaller studios creating incredible films more easily and cheaply, driven by audiences craving new and exciting stories rather than Hollywood’s endless sequels and prequels. This shift is already happening; for instance, the small studio behind Godzilla Minus One recently won an Oscar for VFX, outshining Hollywood giants. So, let’s be part of this revolution. Create your own shorts, series, and movies. Use AI as a tool to help you along the way. Don't give up. Keep creating.

  • What is a green screen?

    Despite the rise of Virtual Production , the green screen remains an indispensable tool in a filmmaker’s toolkit. This technology, known as chroma keying , allows directors to replace or extend the background of a scene, providing endless creative possibilities. Although you might think that chroma keys are particularly popular in genre movies such as fantasy and sci-fi, where imaginative settings and special effects are critical, they are actually a basic tool for every type of movie. Comedies, dramas, or period films like Peaky Blinders and The Crown  heavily use this technique too​. The advertisement and news sectors similarly rely on chroma keying. Chroma Key (by the Oxford English Dictionary) /ˈkrəʊməˌkiː/ A digital technique by which a block of a particular color (often blue or green) in a film or video image can be replaced by another color or image, enabling, for example, a weather forecaster to appear against a background of a computer-generated weather map. WHY IS IT SO IMPORTANT? Chroma keying is indispensable in filmmaking for several reasons: Versatility:  It enables the creation of seemingly any scene without the need for expensive sets or dangerous locations. It means you can either extend your set as needed and/or add new elements into the scene itself. Controlled Environment:  Filming in a studio with a green background saves time and money, as you are not dependent on weather conditions. However, green screens can also be used outside or on set. Cost Efficiency:  Filming with green screens can be more cost-effective than building physical sets or even a day in a virtual production cave. Creative Freedom : Directors can envision and execute scenes that would be impossible to achieve otherwise, allowing them to extend sets, create effects and even make people fly. Tip: Always have a VFX supervisor on set for proper lighting and technical setup. They handle unexpected changes on the spot, ensuring VFX artists can focus on creating the desired effects instead of spending more time in tasks like keying and refining edges​. If you don't have one, we're here to help. WHY GREEN? You probably noted that the green used for chroma key is kind of flashy and bright. The reasons behind it are that it’s not a shade usually used on other objects or clothing in the foreground and it’s the furthest color from skin tones. However, while green is the most common, other colors can also be used, depending on the specific needs of the scene. Here's the rundown: Green Screen:  The most versatile and widely used. Ideal for most scenes due to the high sensitivity of camera sensors to green, and the reasons mentioned before. On the downside, it has a lot of spill * and is not ideal for fine details or blonde hair. * Spill: When green light reflects onto actors or objects, creating unwanted green hues. This needs to be corrected in post-production to ensure a clean and accurate final image. Blue Screen:  Before green, blue was the industry standard for its cleaner mattes and sharpness around the edges. Today, it’s mostly used when the scene has green elements or when filming at night, as blue is less reflective, making it suitable for darker settings. However, it requires more lighting, which can affect the budget. Yellow Screen:  In this instance, it was not a fabric or a conventional screen, but rather the projection of sodium vapor lights onto a wall, which created the very specific yellow spectrum required. This technique was notably used by Walt Disney from the mid-1950s to the 1970s. 
Mary Poppins famously utilized it and won an Academy Award for Special Effects. The technology worked wonders even for translucent elements (which remains a challenge even by today's standards), but it required a prism to separate colors, a technology that is now considered lost—though Corridor Crew recreated it and were blown away by the results. Sand Screens (The Dune Case):  The specific chroma key tone was chosen primarily to seamlessly integrate actors into desert environments while preventing green or blue spillage onto them or other elements such as armor, visors, or any metallic or reflective objects. But how did it work?  It turns out that the opposite shade on the color wheel of the specific sand they used was... blue! This meant that when inverted, they effectively had a blue screen . To ensure its effectiveness, they conducted extensive testing before filming. OTHER USES OF GREEN IN FILMMAKING Green screens are not only used as static backgrounds but also in various dynamic and creative ways to achieve special effects in filmmaking. Actors or stunt performers wear green suits to become invisible in the frame, allowing filmmakers to create the illusion of floating objects or flying people, or to seamlessly integrate CGI characters into live-action scenes. Additionally, green props like balls or rods are used as placeholders for CGI elements, ensuring actors interact naturally with digital elements that will be added later​. For instance, in Shang-Chi, actors worked with a green cushion that vaguely resembled Morris, the six-legged winged furry pet with no eyes. CONCLUSION Since the inception of cinema, chroma keying has remained a pivotal tool for filmmakers, facilitating the creation of visually stunning worlds. Despite the rise of virtual production techniques, green screens continue to thrive due to their versatility, cost-effectiveness, and the creative freedom they offer. It is, in fact, not uncommon to incorporate a green background into LED screens for specific shots. Like any technology, the key lies in knowing when to employ it and when to explore alternatives. With ongoing technological advancements, including AI-assisted keying, the potential for this technology to enhance cinematic storytelling is expanding rapidly and makes it more accessible for indie filmmakers to play around with.
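    To ground the definition above, here is a deliberately minimal sketch of what keying and spill suppression mean in practice. It is nothing like a production keyer (tools such as Keylight or Primatte do far more), the thresholds are arbitrary, and the synthetic "actor" is just a grey square, but it shows the two steps: build a matte from how much greener a pixel is than its other channels, then comp the de-spilled foreground over the new background.

```python
import numpy as np

def green_screen_composite(fg, bg, threshold=0.10, softness=0.10):
    """Tiny green-screen key: matte from green excess, naive despill, then comp.

    fg, bg : float RGB arrays (H, W, 3) in [0, 1], same size.
    """
    r, g, b = fg[..., 0], fg[..., 1], fg[..., 2]
    green_excess = g - np.maximum(r, b)                        # large on the screen
    # alpha = 1 on the subject, 0 on pure green, soft ramp in between
    alpha = np.clip((threshold + softness - green_excess) / softness, 0.0, 1.0)
    # naive spill suppression: never let green exceed the other channels
    despill = fg.copy()
    despill[..., 1] = np.minimum(g, np.maximum(r, b))
    return despill * alpha[..., None] + bg * (1.0 - alpha[..., None])

# Toy usage: a grey "actor" square on a green screen, comped over a blue sky.
fg = np.zeros((100, 100, 3)); fg[...] = [0.1, 0.9, 0.15]       # green screen
fg[30:70, 30:70] = [0.5, 0.45, 0.4]                            # the subject
bg = np.zeros((100, 100, 3)); bg[...] = [0.4, 0.6, 0.9]        # new background
out = green_screen_composite(fg, bg)
print(out[10, 10], out[50, 50])   # background colour vs. (despilled) subject
```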

  • The Art of CGI Capes

    While everyone's talking about the new Superman suit and debating whether it's a good fit, if the color is too bright, or if they like the return of the red trunks or not, we thought we'd focus on capes… or more precisely, CGI capes.

Man of Steel | © 2013 Warner Bros. Entertainment Inc. and Legendary Pictures Funding

As you might know, capes are a defining feature for many iconic characters, adding dramatic flair and helping define their silhouettes. From Superman's iconic red cape to Doctor Strange's mystical Cloak of Levitation, this flat piece of fabric is an essential part of superhero lore (and of other characters, such as kings or Spartans). But translating capes from comic book pages to the big screen is no easy feat. Practical ones can pose real dangers for actors and performers, or haven't you tried to put a blanket on and go for a spin on your bike? Trust us… it's not a good idea. Despite this, capes remain a staple, thanks in large part to the magic of CGI.

Real capes can be cumbersome and dangerous. They can easily get caught in machinery, doors, or underfoot, posing significant risks during stunts and action scenes. Additionally, they don't always move as intended, leading to continuity issues and behaving unpredictably in wind or water. In the past, directors worked with shorter capes, used lighter fabrics to make them billow, or relied on other techniques to add drama to the scene. This was the case with Burton's Batman, whose cape had an internal structure to give it the iconic bat-like shape. However, not all heroes needed such a shape, so filmmakers increasingly relied on CGI to bring capes to life on screen.

The first superhero to sport a CG cape appears to have been Batman, in 1995's "Batman Forever". Given the numerous scenes with elaborate stunts, the filmmakers needed a better (and safer) way to add the cape. As explained in Befores and Afters: "The digital Batman, complete with cape, would ultimately be considered one of the first photoreal full-body digital stunt performers in a film, paving the way for so many synthetic superheroes to come". This set a precedent for future superhero films, demonstrating how CGI capes can enhance storytelling and character depth.

This new capability to direct how a cape moves made it possible to bring characters such as Spawn to the screen in 1997. This was particularly crucial for this antihero because his cape is a powerful, almost sentient part of his character, capable of morphing shapes, extending to great lengths, and providing both defense and offense. Therefore, a CGI cape was used in several sequences to accurately portray those abilities. Another, more recent, example of a cape with emotions, one that needed more animation than just a dramatic flow, is Doctor Strange's Cloak of Levitation.

Spawn | © 1997 New Line Cinema

Even when capes don't have expressive qualities, rendering them in CGI allows for consistent, dramatic visuals, especially during action sequences and flight scenes. Superman's cape is a prime example. In Zack Snyder's "Man of Steel", CGI was used to give the floor-length cape epic movements of its own, allowing it to billow heroically as he soared through the skies, creating memorable shots and a sense of grandeur and power that would be impossible with a practical cape. Similarly, Homelander's cape in "The Boys" is frequently rendered in CGI to ensure it moves in a specific way, enhancing his intimidating, menacing, yet dynamic presence.
"Anytime  [Homelander]'s doing anything crazy like wires or flying or anything like that, we're gonna pull the cape and go CG. We want to control the physics of it when he's flying because that's a big tell for which way the wind's moving”, visual effects supervisor, Stephan Fleet, on Corridor Crew . Now, bear in mind that creating a CG cape is still not just “click and drop”. It involves several steps to achieve a realistic effect , such as creating a 3D model of the cape, adding cloth dynamics to ensure it moves realistically, and texturing it to look like actual fabric rather than a strange blob of color, just to mention a few. Depending on the shot, VFX artists either animate the cape or use advanced cloth simulations for it to move exactly as the director needs. Oh, and you'll probably need a digi-double too. This does not mean you need a CG cape all the time! On the contrary, as we always say, it's the mix between real props and CGI that creates the illusion . Chose your shots wisely and decide when and why to use the 3D one, rather than the real thing. PRO TIP: When incorporating a cape into your hero design, prioritize its role in the story. Decide if the cape should have expressive qualities or if it primarily serves an aesthetic purpose. Collaborate closely with your VFX supervisor early in the pre-production process to ensure the cape enhances your character’s presence throughout the movie. For expert guidance on when to use CGI versus practical effects, and to get comprehensive solutions, consider reaching out to Orbitae . We’re happy to help! So, in the end, and despite Edna Mode’s aversion to capes, 3D technology has given these iconic accessories a new lease on life in superhero films and beyond. Now that you know, the next time you see a superhero’s cape fluttering majestically on screen, remember that it is likely CGI bringing these legendary garments to life.

  • A movie made with Unreal Engine

    "This movie is about humans and where we are going. It’s about homo sapiens," stated Ishan Shukla at the Neuchâtel International Fantastic Film Festival ( NIFFF). "Schirkoa: In Lies We Trust" is an animated film that navigates the cyclical nature of human civilization——encompassing utopia, dystopia and a neutral point of view. Set in a world divided by extreme control and extreme freedom, Shukla’s narrative captures the essence of humanity's ongoing struggle. "Without going full circle, it’s impossible to understand how human beings build and destroy civilizations", Shukla explained. Unlike traditional animated films such as " Nimona ", "Schirkoa" was brought to life using Unreal Engine, a decision that significantly shaped its production. Here’s a look at how it was done. FROM CONCEPT TO SCREEN  The journey of "Schirkoa" began in 2011 with a graphic novel that the director never finished, as he soon asked himself: “Can I do an animated short film alone?”. Indeed, he could. The initial concept transformed then into a 13-minute short film , consisting of 31 sequences and 246 shots, crafted over four years. Initially, Shukla employed traditional animation tools, but the complexity of his idea proved daunting for a feature-length movie. This challenge led him to embrace Unreal Engine, very early on.  "Unreal Engine changed everything, providing live feedback and allowing adjustments on the spot", Shukla said and added: "being the lighter, cinematographer and director, the tool gave me a lot of liberty because I could change things until the very end”. UNREAL ENGINE: A GAME CHANGER FOR ANIMATION MOVIES For those unfamiliar with it, Unreal Engine is a 3D software tool primarily used for creating video games. Owing to its real-time rendering capabilities, it has also become a popular choice for filmmakers looking to craft their own animated films , including us , and for previs . For Shukla, adopting Unreal Engine was transformative, enabling him to bring his vision to life. He started by constructing the huge cities of Schirkoa and Konthaqa. Drawing inspiration from major cities like New York, the director aimed for a universally appealing design, blending elements from diverse cultures to forge a city that resonated with viewers globally. Because Unreal is a game engine, it meant that, after building his cities, Shukla could virtually explore and select precise locations for each scene. He then set dressed these areas, much like you would do with a real-life movie . The tool also allows the use of “multicam sessions”, enabling Shukla to actively manipulate camera angles and focals, while editting scenes in real-time. "Unreal is a superb pre-production tool. It lets you make a rough cut of your film directly within it, so you can feel how the narrative flows and then polish it more as you go along", Shukla. MOCAP AND CHARACTER DESIGN  Despite most characters in "Schirkoa" wearing paper bags on their heads, Shukla ensured they possessed distinct features, given that we could still see their eyes and lower jaws. The voices were recorded (and filmed) first, which gave the mocap actors cues on how to perform. This extensive process, conducted over a period of 14 days in a French studio, encompassed not only the performance capture for the main characters but also for the myriad of "extras" populating the virtual world. 
To achieve this, Shukla presented the actors with a variety of scenarios, which they acted out for extended durations, ranging from simple tasks like fetching the morning paper to engaging in a heated discussion in a bar. This meant he had hundreds of hours of mocap he could use and play with to populate his city. To maintain realism, it was crucial to ensure real-world elements matched their virtual counterparts in the Unreal Engine scenes. "You need to know the height of your table in the real world so that it corresponds with the one within the virtual environment", Shukla explained. He also chose stunt performers and stage actors for their ability to deliver prolonged and dynamic performances, which proved invaluable in scenes like a bustling bar, where their unique actions brought every angle to life. Finally, the mocap data was cleaned up and enhanced by a team in India, ensuring the animations remained smooth and jitter-free (a toy sketch of that kind of cleanup follows at the end of this article).

CHALLENGES AND INNOVATIONS

Ishan Shukla's pioneering adoption of Unreal Engine necessitated the creation of a custom pipeline and cheat sheets, ensuring smooth transitions between the different software applications. Moreover, given that the tool was fairly unknown at the time he was in post-production, Shukla had to take on multiple roles to complete the film. Looking to the future, Shukla acknowledges that the suitability of Unreal Engine will depend on the specific demands of each project. For "Schirkoa", with its extensive urban landscapes and many, many characters, Unreal was a no-brainer. It enabled a degree of complexity and detail unachievable within the same budget and timeframe using traditional animation or live-action methods.

Ultimately, Shukla's experience proves that with creativity, (a lot of) patience and a willingness to embrace new tools, there are now numerous ways to produce an animated film. His journey serves as an invitation to filmmakers to experiment and combine different tools, crafting a unique pipeline that best serves their narrative. At least for us, it was very inspiring.
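One last technical aside, since jitter came up: the simplest slice of that cleanup is just low-pass filtering the captured joint tracks. The short Python/NumPy sketch below shows the idea on fake data; the function name, window size and test signal are our own illustrative assumptions, not Shukla's actual pipeline, which also had to handle occlusions, marker swaps and retargeting onto the paper-bag characters.

import numpy as np

def smooth_joint_track(positions, window=7):
    # positions: an (n_frames, 3) array with one joint's XYZ values per frame.
    # A centred moving average over roughly 5-9 frames knocks out high-frequency sensor
    # noise while keeping the performance; real cleanup does far more than this.
    kernel = np.ones(window) / window
    padded = np.pad(positions, ((window // 2, window // 2), (0, 0)), mode="edge")
    smoothed = np.empty_like(positions, dtype=float)
    for axis in range(positions.shape[1]):
        smoothed[:, axis] = np.convolve(padded[:, axis], kernel, mode="valid")
    return smoothed

# Fake data: a smooth arc plus noise, standing in for a raw capture of a single wrist joint.
frames = np.linspace(0, 2 * np.pi, 300)
clean = np.stack([np.sin(frames), np.cos(frames), frames / 10], axis=1)
raw = clean + np.random.normal(scale=0.01, size=clean.shape)

# The filtered track should sit noticeably closer to the clean signal than the raw one.
print(np.abs(raw - clean).mean(), np.abs(smooth_joint_track(raw) - clean).mean())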
