

  • What is a green screen?

    Despite the rise of virtual production, the green screen remains an indispensable tool in a filmmaker's toolkit. The underlying technique, known as chroma keying, allows directors to replace or extend the background of a scene, providing endless creative possibilities. Although you might think chroma keys are particularly popular in genre movies such as fantasy and sci-fi, where imaginative settings and special effects are critical, they are in fact a basic tool for every type of production. Comedies, dramas and period pieces like Peaky Blinders and The Crown use the technique heavily, and the advertising and news sectors rely on it just as much.

    Chroma key (Oxford English Dictionary) /ˈkrəʊməˌkiː/: a digital technique by which a block of a particular color (often blue or green) in a film or video image can be replaced by another color or image, enabling, for example, a weather forecaster to appear against a background of a computer-generated weather map.

    WHY IS IT SO IMPORTANT?

    Chroma keying is indispensable in filmmaking for several reasons:

    - Versatility: it enables the creation of seemingly any scene without the need for expensive sets or dangerous locations. You can extend your set as needed and/or add new elements into the scene itself.
    - Controlled environment: filming in a studio against a green background saves time and money, as you are not dependent on weather conditions. That said, green screens can also be used outdoors or on set.
    - Cost efficiency: filming with green screens can be more cost-effective than building physical sets, or even than a day in a virtual production cave.
    - Creative freedom: directors can envision and execute scenes that would be impossible to achieve otherwise, extending sets, creating effects and even making people fly.

    Tip: always have a VFX supervisor on set for proper lighting and technical setup.
    They handle unexpected changes on the spot, ensuring VFX artists can focus on creating the desired effects instead of spending extra time on tasks like keying and refining edges. If you don't have one, we're here to help.

    WHY GREEN?

    You have probably noticed that the green used for chroma key is rather flashy and bright. The reasons are that it is not a shade usually found on other objects or clothing in the foreground, and it is the color furthest from skin tones. While green is the most common choice, other colors can also be used, depending on the specific needs of the scene. Here's the rundown:

    - Green screen: the most versatile and widely used. Ideal for most scenes thanks to the high sensitivity of camera sensors to green, plus the reasons mentioned above. On the downside, it produces a lot of spill* and is not ideal for fine details or blonde hair.

    *Spill: green light reflecting onto actors or objects, creating unwanted green hues. This needs to be corrected in post-production to ensure a clean and accurate final image.

    - Blue screen: before green, blue was the industry standard thanks to its cleaner mattes and sharper edges. Today it is mostly used when the scene contains green elements or when filming at night, as blue is less reflective and therefore suits darker settings. However, it requires more lighting, which can affect the budget.

    - Yellow screen: in this case, not a fabric or conventional screen but the projection of sodium vapor lights onto a wall, which created the very specific yellow spectrum required. The technique was notably used by Walt Disney from the mid-1950s to the 1970s; Mary Poppins famously employed it and won an Academy Award for Special Effects.
    The technology worked wonders even for translucent elements (which remain a challenge even by today's standards), but it required a prism to separate the colors, a technology now considered lost—though Corridor Crew recreated it and were blown away by the results.

    - Sand screen (the Dune case): this specific chroma key tone was chosen primarily to integrate actors seamlessly into desert environments while preventing green or blue spill onto them or onto elements such as armor, visors, or any metallic or reflective objects. But how did it work? It turns out that the opposite of that particular sand shade on the color wheel was... blue! When inverted, they effectively had a blue screen. To ensure its effectiveness, they conducted extensive testing before filming.

    OTHER USES OF GREEN IN FILMMAKING

    Green screens are not only used as static backgrounds but also in dynamic and creative ways to achieve special effects. Actors or stunt performers wear green suits to become invisible in the frame, allowing filmmakers to create the illusion of floating objects or flying people, or to integrate CGI characters seamlessly into live-action scenes. Green props like balls or rods are also used as placeholders for CGI elements, ensuring actors interact naturally with digital elements added later. In Shang-Chi, for instance, actors worked with a green cushion that vaguely resembled Morris, the six-legged winged furry pet with no eyes.

    CONCLUSION

    Since the early days of cinema, chroma keying has remained a pivotal tool for filmmakers, facilitating the creation of visually stunning worlds. Despite the rise of virtual production techniques, green screens continue to thrive thanks to their versatility, cost-effectiveness and the creative freedom they offer. It is, in fact, not uncommon to incorporate a green background into LED screens for specific shots.
    Like any technology, the key lies in knowing when to employ it and when to explore alternatives. With ongoing advancements, including AI-assisted keying, the potential of this technology to enhance cinematic storytelling is expanding rapidly, making it more accessible for indie filmmakers to experiment with.
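    The keying logic described above can be sketched in a few lines of code. This is a minimal illustration using NumPy, not production keying software: real keyers work on chroma channels, soften edges and suppress spill, while this hard binary mask only shows the principle. The sand tone and the distance threshold are our own assumed values, and the complement function mirrors the color-wheel inversion mentioned in the Dune example.

```python
import numpy as np

def chroma_key_composite(fg, bg, key=(0, 255, 0), threshold=120):
    """Replace pixels near the key color in `fg` with pixels from `bg`.

    fg, bg: HxWx3 uint8 RGB frames of the same size.
    threshold: max Euclidean distance in RGB space to count as 'key'
    (an assumed value for this sketch).
    """
    # Distance of every pixel from the key color, per pixel.
    dist = np.linalg.norm(fg.astype(int) - np.array(key), axis=-1)
    mask = dist < threshold          # True where the screen shows through
    out = fg.copy()
    out[mask] = bg[mask]             # composite the background in
    return out

def complement(rgb):
    """Opposite color on the RGB wheel, as in the Dune sand-screen trick."""
    return tuple(255 - c for c in rgb)

# A hypothetical sand tone inverts to a bluish hue, i.e. an effective blue screen.
print(complement((194, 178, 128)))   # (61, 77, 127): blue channel dominates
```

    In practice, the hard threshold above is where spill problems appear: pixels that are only partly green (hair, edges, reflections) fall on either side of the cutoff, which is why real keyers produce a soft alpha matte instead of a binary mask.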

  • Showrunner: The Netflix of AI

    This weekend, we saw a meme of a guy sitting on his couch, typing a prompt to create the kind of movie he wanted to watch. The caption read: "movies in 2027". To be honest, we didn't know how to feel about it. Little did we know that while having our coffee this morning, we'd find out it's already happening! A new player has entered the ring, promising to revolutionize how we create and consume content: Showrunner.

    WHAT IS SHOWRUNNER?

    Showrunner is a text-to-episode system, an AI-powered platform designed to assist in the making of "AITV", as described on their website. Created by The Simulation, the platform offers tools that leverage AI to help script, produce and even cast shows. The goal is to democratize content creation, making it accessible to a broader range of people who have stories to tell but may lack traditional resources. In fact, their target audience is people outside of the filmmaking industry: non-professionals.

    "It's the Netflix of AI", founder and CEO Edward Saatchi told Forbes. "Watch an episode, or make an episode."

    With this tool, users can create scenes and episodes lasting from 2 to 16 minutes by providing a short prompt. The platform features AI-generated dialogue, voices, editing, various shot types and consistent characters. However, as Saatchi told Theoretically Media, episodes are self-contained for the moment, so you have to "think more like a sitcom where each episode is self-contained and less like an 8-season HBO epic", although the team is working on longer-term consistency. The platform is also limited to specific styles: anime, 3D animation and cutout.

    Showrunner launched last Thursday with teasers for 10 shows already in development. Currently in an alpha program, the platform has a waitlist of over 50,000 people, according to their website. However, if you have a comedy series idea, you might get early access, as they are currently focusing on that genre.
    INDUSTRY CHALLENGES

    The launch of Showrunner has generated significant buzz and turmoil in the filmmaking industry, which is still recovering from the writers' and actors' strikes and from ongoing negotiations with IATSE, the union representing many of the crew members essential to film and television production.

    On the same day Showrunner was introduced, Sony Pictures Chief Executive Tony Vinciquerra announced at an investor conference in Japan that the company plans to explore using AI to produce films for theaters and television more efficiently, as reported by The Hollywood Reporter. This highlights a broader industry trend towards integrating AI into various aspects of film production, a trend that contributed to the recent strikes.

    But, as George Lucas told Brut during the Cannes Festival, the use of technology in filmmaking is not only inevitable but has been a staple for over 25 years. However, disruptive technologies come with their fair share of pain. Echoing this sentiment, DreamWorks founder Jeffrey Katzenberg stated at a Bloomberg conference in November 2023 that AI would drastically change how animated movies are made, reducing the resources needed to just 10% of what was previously required. Showrunner exemplifies this potential.

    "In the good old days when I made an animated movie, it took 500 artists five years to make a world-class animated movie. I think it won't take 10 percent of that", said Katzenberg.

    These developments, coupled with the recent wave of layoffs in the animation and VFX industries and the closing of several animation studios, paint a worrisome landscape for those who create content and entertainment. In short, the integration of AI presents both opportunities and significant challenges, as the industry grapples with the implications for traditional creative roles and job security.
    THE SILVER LINING

    While Showrunner arrives with a strong and innovative allure, much like Sora, its long-term impact remains uncertain. The platform has the potential to democratize content creation, yet it's clear that AI alone cannot replace human creativity and originality. Not everyone is a good storyteller, which is why writing as a profession exists. A single prompt is like an idea; turning it into something interesting (a full script) is a whole other story. So, while Showrunner may initially attract a lot of interest, sustaining that interest will require more than just novel technology: it will need compelling, human-driven stories.

    When we think about it, the future of content creation can be summed up with a simple equation: AI replicating existing ideas + Hollywood's fear of innovation = more generic movies to come, which is the root of the problem we are facing right now. Or, in the words of the father of Star Wars: "the stories they tell are just old movies. There's no original thinking [...]. Big studios don't want new ideas, they don't have the imagination to see something that isn't there".

    This suggests that we may see a rise in smaller studios creating incredible films more easily and cheaply, driven by audiences craving new and exciting stories rather than Hollywood's endless sequels and prequels. The shift is already happening; for instance, the small studio behind Godzilla Minus One recently won an Oscar for VFX, outshining Hollywood giants.

    So, let's be part of this revolution. Create your own shorts, series, and movies. Use AI as a tool to help you along the way. Don't give up. Keep creating.

  • The Legal Battles of AI: the voice cloning cases

    Today, you can ask AI to create images, videos and even music. But you can also use AI-generated voices, or clone existing ones, for many purposes. With tools like ElevenLabs (our favorite), users can alter, clone and dub voices with impressive accuracy. The technology is increasingly used in various fields, from film narration and translation to customer service and more. However, its rapid development raises significant ethical and legal concerns. Here are some of the hottest controversies.

    SCARLETT JOHANSSON VS. OPENAI

    Scarlett Johansson may be lawyering up against OpenAI over concerns about the unauthorized use of her voice for their AI personal assistant, Sky. According to a statement the actress gave to NPR, Sam Altman, CEO of OpenAI, reached out to her nine months ago to ask her to voice Sky, as it would be "comforting to people" who are already familiar with her voice as an AI in the movie "Her". She refused. To her surprise, when the assistant launched, its voice sounded eerily like hers. To add insult to injury, Altman posted "Her" on X, leading many to believe there was foul play. OpenAI later explained in a blog post how the voice was chosen. Nonetheless, Johansson's legal team argues that the technology poses a significant threat to performers' control over their own voices. The situation could set a precedent for how AI-generated content is regulated and for the extent to which individuals can protect their vocal likenesses.

    "In a time when we are all grappling with deepfakes and the protection of our own likeness, our own work, our own identities, I believe these are questions that deserve absolute clarity", Johansson told NPR.

    ACTORS ACTUALLY SUE AN AI COMPANY

    In the USA, two actors are suing the AI voice generation company LOVO for unauthorized use of their voices, as French lawyer Betty Jeulin shared on her LinkedIn. In the lawsuit, the actors allege that their voices were cloned and used without consent.
    Both were hired via Fiverr before the AI boom (in 2019 and 2020) for purposes like "academic research in voice synthesis" and "radio ad script tests for internal use". One of them later discovered his cloned voice being used in 2022 to promote Russian military equipment on YouTube, and in a 2023 podcast about the dangers of AI (ironically). The other found her voice and image in a 2023 promotional video by LOVO, showcasing its AI voice cloning technology to investors on YouTube. They were paid $1,200 and $400, respectively. The New York court will evaluate multiple legal issues, including possible breaches of SAG-AFTRA's rules on AI usage, highlighting important concerns about consent and fair compensation in the growing AI voice industry.

    THE CORTANA SWISS-GERMAN VOICE CASE

    Another interesting case involves a Swiss-German artist, Helena Hallberg, who voiced Cortana for Microsoft in that language. In a TikTok video, she expresses disbelief upon learning that her voice had been sold (cloned) to other platforms by the company, all for just $3,000. The incident highlights the growing concern among voice professionals and other artists about the lack of control and potential exploitation enabled by AI voice cloning. This, combined with the unclear legal framework surrounding AI, leaves artists at a loss when situations like these arise.

  • Apple “Crushes” Creativity

    Apple recently faced significant backlash over its iPad Pro "Crush" advertisement, prompting an apology from the tech giant and opening discussions about human creativity in a world increasingly influenced by AI. Here's what happened.

    THE AD AND ITS CONTROVERSY

    On May 7, Apple CEO Tim Cook published the ad on his X account. As of today, the post has over 60 million views, but the comments are overwhelmingly negative. To promote its thinnest iPad yet, Apple crushed a myriad of artistic symbols (paint cans, musical instruments, cameras) with an industrial hydraulic press. The backlash was immediate, forcing Apple to issue an apology just a few days later and acknowledge that the ad "missed the mark", as reported by CNN. Many internet users were also quick to compare the ad with LG's 2008 spot, in which various instruments were likewise destroyed in a vertical press to create a mobile phone. As a result, Apple lost even more points for lack of originality.

    IN CONTRAST WITH THE 1984 AD

    Apple's "1984" Super Bowl ad is one of the brand's most famous commercials. Directed by Ridley Scott, it depicted a dystopian future inspired by George Orwell's novel "Nineteen Eighty-Four". The ad showed an athletic heroine destroying a screen portraying Big Brother, symbolizing IBM, to introduce the Macintosh computer. Despite initial controversy and almost being pulled, it set a new standard in advertising and solidified Apple's image as an innovative disruptor, as described in Mental Floss.

    The "1984" ad contrasts sharply with today's ads. The company's early advertisements, including "Think Different", celebrated individuality, creativity and breaking away from the norm. They were not just about the products but about a vision of technology as a liberating force, a way to enhance creativity.
    Over the years, Apple's focus has shifted to highlighting the design and functionality of its products, often with simpler, more direct messages that, in this case, did not land.

    SAMSUNG'S SWIFT RESPONSE

    Capitalizing on Apple's stumble, Samsung released its own ad just a week later, trolling Apple's blunder and highlighting Samsung's commitment to originality and creativity. Estefanio Holtz, Executive Creative Director at advertising agency BBH USA, said in a statement to CBS News: "It's about humanity, and the tablet is just a tool that helps her [the guitar player] play the notes. We went in the opposite direction to remind people, as we go through technological innovations, that we cannot leave humanity behind". The response, however, was met with mixed reviews.

    TECHNOLOGY AND CREATIVITY

    The controversy sparks a broader conversation about the role of technology in creative industries. As technology advances, AI included, concerns grow over the authenticity and originality of creative work. The backlash against Apple's ad reflects a fear that AI-driven or derivative creativity may undermine genuine human exploration and creation. In a time when AI can generate art, music and even advertisements, the emphasis on originality and ethical creativity becomes even more critical.

    Technological tools have always been a double-edged sword in the creative process. On one hand, they enable unprecedented levels of creativity and productivity: software for graphic design, video editing and music production allows creators to push the boundaries of their fields. On the other hand, the accessibility of these tools can lead to homogenization, where the unique touch of human creativity is overshadowed by templated, algorithm-generated content. In this case, and in today's landscape, Apple's "Crush" ad pushed (or crushed) the wrong buttons.

  • AI’s not a Magic Trick, neither is Sora. The Air Head case.

    A few weeks ago, we published an article about Sora, OpenAI's new text-to-video generator, wondering if (and how) it might revolutionize filmmaking. The tool was subsequently tested by several artists and filmmakers, whose short films gained traction and stirred up curiosity, but also doubts, in the minds of many creatives. But is AI as easy as it seems? The short answer is no, as we'll see with Air Head, a Sora short film.

    As of today, AI is a tool that recreates things (images, videos, text). While anyone can quickly generate an image of a bunny under a rainbow in Dalí's style, the standout AI art comes from a new breed of artists who invest time mastering these ever-evolving tools. They experiment with prompts, iterate repeatedly, create new workflows and try new approaches. More often than not, the output is refined or post-produced with "traditional" tools to make the final result cohesive.

    "It's not as easy as just a magic trick: type something in and get exactly what you were hoping for", said Sydney Leeder, Shy Kids producer, about Sora.

    THE AIR HEAD VIDEO CASE

    A prime example is the Sora short film that went viral, Air Head. Created by the Toronto-based group Shy Kids, it features a man named Sonny with a yellow balloon for a head. The film's concept got the attention of thousands, partly because it was promoted as a showcase of Sora's imaginative content generation. And, yes, it is amazing. Today it is often cited as a prime example of "what AI can do in video", but is it really only AI? Again, the short answer is no.

    First and foremost, the filmmakers at Shy Kids were the ones who came up with the idea. To make it a reality, they had to test various prompts and create many iterations of scenes to find a few that worked. In an in-depth interview with FXGuide, Patrick Cederberg, Shy Kids' animation and post-production director, discussed their experience using Sora.
    He noted that hundreds of generations were produced, saying, "my math is bad, but I would guess probably 300:1 in terms of the amount of source material to what ended up in the final." He also explained that, on average, rendering a 3 to 20-second clip took around 10 to 20 minutes. While Sora can render up to 720p, they chose to work "at 480 for speed and then up-res using Topaz", another AI tool that upscales video resolution.

    Despite Sora's capabilities, the generated scenes also required extensive post-production work. The team faced issues like maintaining the balloon's color and shape across scenes, and had to remove unwanted artifacts such as faces embedded in the balloon. "What you end up seeing took work, time, and human hands to get it semi-consistent, through curation, scriptwriting, editing, voiceover, music, sound design, color correction... all the usual post-production stuff", Cederberg explains in the BTS video.

    So, while the technology enabled the filmmakers to generate surreal short clips quickly (which is very exciting), it still required manual intervention to achieve the complete vision. This shows that tools like Sora aren't a magic bullet for seamless and original art. Instead, they complement traditional techniques and artists. As Sydney Leeder noted, "using Sora definitely opens up a lot more possibilities, especially with indie film crews working on low-budget projects".
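    To put those numbers in perspective, here is a back-of-the-envelope calculation. The 300:1 ratio and the per-clip render times come from the interview; the roughly 90-second final runtime and the midpoint values are our own assumptions for the sake of the sketch.

```python
# Rough scale of the Air Head workflow, from the figures in the interview.
final_runtime_s = 90                 # assumption (ours): final film ~90 seconds
source_ratio = 300                   # "probably 300:1" per Cederberg

source_footage_s = final_runtime_s * source_ratio
print(source_footage_s / 3600)       # 7.5 hours of generated material

# Each 3-20 s clip took roughly 10-20 minutes to render at 480p;
# we take midpoints (again an assumption) to estimate total render time.
avg_clip_s, avg_render_min = 10, 15
clips_needed = source_footage_s / avg_clip_s
print(clips_needed * avg_render_min / 60)   # 675.0 hours of rendering, if serial
```

    Even with generous assumptions, that is hours of raw footage and weeks of serial render time for a short film of about a minute and a half, which underlines the point that Sora's output was curated and post-produced rather than conjured in one prompt.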

  • Noseless villains: when SFX meets VFX

    When you're creating a villain, you had better give him or her features that are easily and immediately recognizable, because villains need to be memorable. You can achieve this through the profile, the voice or a specific feature, like a scar or the absence of a nose. Removing this central part of the face (a part we all take for granted) makes the villain less human and closer to death, and therefore more of a potential threat. That is why noseless antagonists are so common in the villain arena. But, as you might have guessed, creating a character without a nose is no easy endeavor. To make it believable, you have to blend the practical with the digital. It's a perfect example of SFX meeting VFX.

    THE GHOUL - FALLOUT

    Walton Goggins is the actor who plays the Ghoul in the recently released series adaptation of the game Fallout. The SFX make-up, designed by Vincent Van Dyke and applied by Jake Garber, initially took around 5 hours to put on, including prosthetics and dentures, though the team was able to narrow it down to 2 hours. To remove the nose, they painted a few white tracking dots over it, and VFX studio FutureWorks India stepped in to erase it in at least 500 shots, according to Looper. The actor told Deadline that the transformation was "extremely anxiety provoking" at first, as he had to figure out how to act, express himself and talk with all the prosthetics on.

    VOLDEMORT - HARRY POTTER

    He-Who-Must-Not-Be-Named had a serpent-like nose that was very hard to create. According to Shaune Harrison, key prosthetics designer on Harry Potter and the Philosopher's Stone, the producers initially wanted the nose to be removed practically. "Even though we knew it was fairly impossible, we went ahead and sculpted a version which of course was rejected", he recounts on his website. They therefore opted to remove it digitally, adding tracking dots to the face, which proved incredibly hard.
    In an interview, Paul Franklin, the film's visual effects supervisor, said that Voldemort's nose "had to be painstakingly edited out, frame by frame, over the whole film. And then the snake slits had to be added and tracked very carefully using dots put on his face for reference", adding: "The art and time that goes into those nostrils should never be underestimated".

    RED SKULL - CAPTAIN AMERICA

    Red Skull is such an important and recognizable character in the Marvel comics that recreating him for a live-action movie was a great challenge. The beautiful seven-piece silicone prosthetics, applied in around 3 1/2 hours by SFX make-up artist David White, were designed to make sure that the features of the actor, the one and only Hugo Weaving (LOTR, The Matrix, etc.), were never lost. Then it was time for digital enhancement. "His nose had been simply left black by make-up, and we had to paint that out, replacing it with a CG cavity complete with sinewy tissue in his sinus", said Jonathan Fawkner, visual effects supervisor at Framestore.

    Left: make-up with tracking dots | Right: final look with VFX ©Walt Disney Company

    What seemed like a relatively simple brief of nose replacement became more complicated than expected, as Fawkner explains: "the mask is a beautiful piece of work, but, ultimately, it sat on top of [Hugo's] face, with all that that entails. It bulged over his neck, over the back of the head, it had too prominent a chin in some shots (...). Hugo's performance pushed the mask into places which prosthetics couldn't anticipate". So, what did they do in addition to removing the nose? In the end, they had to recreate a full 3D version of the head, among other things.

    VECNA - STRANGER THINGS

    With Vecna, the Duffer brothers, creators of Stranger Things, wanted an iconic villain akin to the Night King. So it was only logical that they contacted the man who brought the Game of Thrones villain to life: Barrie Gower.
    Inspired by the concept art made by Michael Mayer, the team at BGFX made a full body cast of the actor in order to sculpt and mold up to 25 different prosthetics. In total, it took the team of make-up artists around 8 hours to apply and paint all the body, head and face appliances. "It was very clear from day one that we would work very closely with the VFX team", Gower explains to Vanity Fair. That collaboration covered enhancements like the removal of the nose (painted black with white dots), but also the moving vines all over his body. And although Vecna's left hand was sculpted and created practically, it had to be completely replaced by a CG one, because "the on-set practical suit wasn't enabling to have proper acting with it. So, every time you see this mutated hand, it's the work of the animation team", as explained by Julien Héry, VFX supervisor at Rodeo FX. See the full breakdown here.

  • To AI or not to AI?

    That is the question in the filmmaking industry right now (and probably everywhere else). As with any new technology, there is tension between innovation and tradition, and AI is no exception. Although revolutionary, it is also sparking significant backlash in filmmaking (we still remember the opening title sequence of Secret Invasion). Filmmakers, actors and audiences are grappling with questions about the limits of AI's role in creative processes. Here are some of the most recent controversies that highlight the debate over AI's place in filmmaking. Where do you think the limit is?

    REMASTERING OLD MOVIES WITH AI

    Recently, AI technology has been used to remaster classic films in 4K resolution, including James Cameron's "True Lies", "Aliens" and "The Abyss", to mixed reviews. Some say that removing the grain, among other pristine enhancements, makes everything feel less real, even a bit weird, which raises questions about the balance between enhancing image quality and preserving the original aesthetic.

    This kind of backlash is not new. In 1998, when "Titanic" was released on LaserDisc and VHS, significant work was done to erase imperfections from the negative. Yet some viewers objected, insisting that the original flaws, like scratches, should remain. Geoff Burdick, an executive at James Cameron's Lightstorm Entertainment, told The New York Times that "there were a lot of folks who said, 'This is not right! You've removed all of this stuff! If the negative is scratched, then we should see that scratch.' People were really hard-core about it". So today's reaction came as no surprise to him.

    AI-GENERATED PROMOTIONAL MATERIAL

    A24's latest film, "Civil War", was promoted with AI-generated posters depicting chaotic scenes. Fans were quick to notice wonky details (such as a three-door car), raising questions about the impact of AI on the real artists who could have done the job.
    Some even called it false advertising, as the images did not appear in the movie. However, a source told The Hollywood Reporter that "the entire movie is a big 'what if' and so we wanted to continue that thought on social — powerful imagery of iconic landmarks with that dystopian realism", and that is why they ran the campaign.

    ©A24 - Instagram page

    Last year, it was Disney that was accused of using AI to generate a poster promoting "Loki", although the company later debunked the claim, according to Mashable.

    GENERATED ELEMENTS WITHIN MOVIES

    The horror film "Late Night with the Devil" faced backlash for using AI to generate three 1970s-style title cards. Some people on X called for a boycott; others argued that it starts with small things, like three title cards, but ends up undercutting and underpaying artists. The Cairnes brothers, the film's writer-directors, responded to the controversy by telling Variety that "in conjunction with our amazing graphics and production design team [...], we experimented with AI for three still images which we edited further and ultimately appear as very brief interstitials in the film".

    Left: movie poster | Right: AI-generated card ©IFC Films and Shudder

    In another case, AI-generated posters appeared in an episode of "True Detective", sparking discussions about AI's use in background imagery and its impact on the series' authenticity, as Futurism reported. And last year it was Netflix Japan that came under pressure after announcing on X that it had used AI-generated background art for an animated short called "Dog and Boy".

    AI-GENERATED VOICES

    In the 2024 remake of "Road House", allegations arose that AI was used to recreate actors' voices during the 2023 SAG-AFTRA strike. According to Looper, R. Lance Hill, the original writer, filed a lawsuit against Amazon Studios and Metro-Goldwyn-Mayer, claiming AI was used for Automated Dialogue Replacement (ADR) to speed up production.
    This raised concerns about using AI to replace actors' work, strike or no strike. A spokesperson for Amazon, however, refuted the claims.

    On the other hand, AI has been used for positive purposes. In 2022, Fortune reported that Sonantic, an AI voice technology company, "masterfully restored" Val Kilmer's voice, which he lost after a two-year battle with throat cancer. Paramount clarified that the technology wasn't used in "Top Gun: Maverick", despite rumors to the contrary (though they also claimed the movie had zero CGI, which... you know).

    AI USE IN DOCUMENTARIES

    While some uses of AI in fictional movies may be acceptable (although not without controversy), it's a different story in documentary filmmaking, where authenticity is crucial. Netflix faced criticism for its documentary "What Jennifer Did", which allegedly used AI-generated images without clear disclosure. Futurism was the first to point out inconsistencies in the images depicting Jennifer Pan's "bubbly" personality. In an interview with The Toronto Star, however, executive producer Jeremy Grimaldi said: "The photos of Jennifer are real photos of her. The foreground is exactly her. The background has been anonymized to protect the source", thus sidestepping the question of whether AI tools were used to modify them. Regardless, for those who have tinkered a bit with AI, the images do raise serious questions, and a lack of transparency about AI use would cross a critical line into malpractice.

    SO, WHEN TO AI AND WHEN NOT?

    The question of whether to embrace AI in filmmaking or avoid it remains a hot topic. While AI has undoubtedly made some tasks easier, sometimes reducing the need for larger production teams, it can't replace human creativity and insight. The backlash against AI by audiences often stems from a lack of transparency, or from fear that technology will erode the artistic integrity that filmmakers and film lovers value.
However, these reactions can also drive filmmakers to use AI without full disclosure, leading to greater mistrust, akin to the “zero CGI” campaigns. So, when should AI be disclosed? In documentaries and other journalistic works, transparency seems crucial. But in fictional films, the line is less clear. Should we require studios to disclose every AI tool used, and how it was used? What about other software, or even machines, like sewing machines? That seems a bit excessive. Ultimately, the debate over AI in filmmaking reflects a larger struggle between innovation and tradition. But did you know that AI has actually been part of the industry for years? We're only hearing about it now. In our next article, we'll delve into the history of AI in filmmaking.

  • We tested various AI music generators

We often discuss image and video generators, but let's shift our focus to music generators — a burgeoning field in the AI landscape. With a plethora of options available, we decided it was time to put some of these tools to the test to gauge their evolution and what they currently offer. These generators operate by analyzing extensive datasets of music, learning from a variety of styles and compositions. Users can specify parameters such as genre, tempo and mood, guiding the AI to produce music that aligns with these preferences. As filmmakers and musicians ourselves (fun fact: check out our Spotify), we find these tools interesting, as they help quickly sketch out song ideas or create simple tracks for social media videos and the like. COPYRIGHTS AND LICENSING WITH AI-GENERATED MUSIC Now, before we go on, keep in mind that the rise of AI-generated music obviously brings up significant questions regarding copyright. So, it's essential for users to understand the terms and conditions of usage and licensing, as these can vary greatly between services. Some platforms might retain rights to the music created, while others may offer complete ownership to the user. Some allow you to monetize the song, but only on paid plans. Always review the licensing agreement carefully to ensure that you retain the rights needed for your intended use, be it ads, content creation or even short films and movies. TESTING AI MUSIC GENERATORS WITH THE SAME PROMPT To compare the capabilities and outputs of different AI music generators that you can try out right now, we used the same prompt across platforms. This experiment highlighted the strengths and creative diversity of each tool. Prompt: “Make a grunge song, very passionate and depressed, the likes of Something in the Way. Make it about self-doubt, control and aliens” UDIO - As the newest kid on the block, UDIO is breaking the internet.
Despite being in its beta testing phase, it produces quite impressive results. It effectively captures the intended genre, and the voices are convincing. It generates two songs per prompt, each approximately 30 seconds long, complete with lyrics. The songs can be extended if needed. Currently, control over the output is limited; you can only adjust a few parameters, such as the prompt, tags, whether the lyrics are custom or generated, and whether the composition should be instrumental. SUNO – We first tested it last year, and it seems to have evolved nicely since then. Today, it generates two songs per prompt, each with two verses and a chorus. Additionally, it creates lyrics—which you can customize or replace with your own—and an accompanying image for your song. It's user-friendly, though it lacks extensive parameters to control the final output, similar to UDIO. The voices are decent and it generates some intriguing ideas. SONAUTO – This tool is quite straightforward; you can use a prompt or even a song as a reference, and that's it. It generates three songs per prompt, complete with lyrics. However, the quality leaves much to be desired—it's as if we're dealing with a band that needs more rehearsal. The "singers" require significant improvement and the overall sound is pretty messy. Despite these shortcomings, it does manage to create a song within the specified genre, and it's also fun to hear AI fail. :) BEATOVEN – This tool is primarily designed to create moods and atmospheres rather than full songs. We conducted some tests with it a year ago and have noticed little evolution in its capabilities since. While it offers several parameters to control aspects like emotions, instruments, tempo and genre, it hasn't fully satisfied our requirements or lived up to our expectations. Disclaimer: they don't allow downloads unless you pay for a subscription, so here's the one we created a year ago; as said, the concept is similar, more atmospheric than songs per se.
SOUNDRAW – This one is very different from the others. It doesn't require a text prompt; users simply set the length, tempo and genre. It generates—or rather, spurts out—a multitude of song ideas that sound more like MIDI tracks, serving as a foundational base for further creative development. The tool also allows users to “shorten intros, rearrange choruses, and personalize your song structure”, as described on its website. Although it doesn't support adding vocals for this genre, our tests with trap music revealed it occasionally inserts brief 'hey' sounds—not full singing voices. We recommend keeping an eye on this tool if you're a musician. However, for filmmakers, it might not be the ideal choice. Disclaimer: they don't allow downloads unless you pay a subscription. SPECIAL MENTION: AIVA of Nvidia - it offers the ability to create specific and customized music. Unlike others, it doesn't accept text prompts. Instead, you can create a song based on a style, a chord progression, step-by-step adjustments, or musical influences. It's more complex than the others, and we plan to explore it further. However, for the purposes of this article, it's not included in our main comparison. What we've observed so far is its limitation in recognizing the 'grunge' genre. It's also worth noting that this tool seems to be particularly suited for producers and musicians, rather than filmmakers or general content creators. We also explored other tools like Mubert and Stable Audio, though these didn't quite capture our interest or provide the fun results we hoped for. And there's still a wealth of AI technology out there to explore, such as Soundful and the upcoming MusicLM from Google. In conclusion, much like other types of AI generators, those that generate music can be both fun and useful. It’s essential that we harness them wisely to ensure that creativity flourishes without stagnation, while also respecting the rights of all creators involved. 
How we use these AI tools will significantly shape the future of art. What is certain is that these music generators will enable more people to explore their musical potential, paving the way for a new breed of artists to emerge.

  • "Dune: Part Two": how did they do it?

    In the past, there were attempts to bring Frank Herbert's Dune to the big screen, but without success. Jodorowsky, in the 1970s, envisioned the project and assembled a dream team to bring the story to life, but it never received the green light. Then, in 1984, David Lynch's controversial version was released. In 2021, it was Denis Villeneuve's turn and, this time, it was a success. The second installment, released in early 2024, continues in this vein. It has a distinct, epic and captivating look. But how did they do it? What technologies were used? Here are our top 3 techniques used to bring this monumental science fiction film to life. The Fremen's blue eyes Changing eye color on screen is a well-known challenge, one that has sometimes led adaptations, like those of Harry Potter or Daenerys Targaryen, to deviate from the original descriptions. Contact lenses are impractical, and manual visual effects are costly in time and resources, which limits these changes. For "Dune: Part Two", the DNEG team innovated with a more efficient solution. Unlike the first film, where the blue eyes were added manually, this sequel introduced artificial intelligence. They trained a machine learning model on shots from the first film, so the algorithm could automatically recognize human eyes and color them blue. Although it required adjustments to avoid some errors, such as changing the eye color of non-Fremen characters, and some minor touch-ups, this revolutionary method, described by Paul Lambert, VFX supervisor at DNEG, marks a significant advance in post-production techniques. Villeneuve Combined Unreal Technology with Traditional Storyboards The use of Unreal Engine was crucial for planning and producing the film. At the SXSW festival, a panel called "Dune Two, Real-Time Tech & the Implications for Storytelling" highlighted how integrating this technology brought the film to life, thanks to previs.
“I would encourage many people in my position to explore Unreal, to explore other pre-visualization techniques that can help you support your director as much as you can,” Jessica Derhammer, co-producer. According to Derhammer, given the magnitude of the film and the added complexity of shooting in various locations, including the desert, there was a lot of prep involved, so they had to align the creative side with the logistics early on. The question quickly became: "practically, how are we going to shoot this in six months?". That's when they decided to use Unreal Engine to previsualize the sets and even the characters. Drones were also deployed to scout locations. The data was then imported into Unreal Engine, allowing them to work in advance on blocking, lighting, shadow areas, sunlight hours, angles and much more. "You’re not making these decisions in a vacuum. You’re actually looking through the real camera lens and then you can pop out of that view and see what’s required of the scene around it; where can I position my lights? How many lights do I need? [...] And it really allows the filmmakers to all congregate and make informed decisions together that serve every individual department”, confirms Brian Frager from Epic Games. Gladiator Scene on Harkonnen To capture the unique atmosphere of the Harkonnen planet, a specific infrared shooting technique was employed, rendering the images in black and white and giving the scenes an unreal, sinister aspect. The technique relied on the camera sensor's infrared sensitivity, a method already exploited in other films like "Nope" to create a night effect, and even by Villeneuve himself for visual effects in other projects. In this case, the goal was to produce a feeling of eerie unreality, where the characters' skin becomes almost translucent.
This artistic decision, once made, was irreversible during shooting, highlighting the team's commitment to this particular aesthetic vision. As the director explained to IndieWire: "I had to warn the studio that there was no way back. It’s not an effect that we did in post-production", adding, "I love the commitment and the risk of it". This method also posed a real challenge for the makeup and costume departments, requiring exhaustive tests to ensure that colors and textures held up under infrared. The reactions of materials to specific light and heat conditions were unpredictable; even tattoos hidden under traditional makeup were revealed under infrared.
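For the curious, here is what the recoloring half of the Fremen-eyes trick boils down to. DNEG hasn't published their pipeline, so this is only a toy NumPy sketch under our own assumptions: the function name, blend strength and target color are invented, and the genuinely hard part (the trained segmentation model that produces `eye_mask` for each frame) is simply assumed to exist.

```python
import numpy as np

def tint_eyes_blue(frame: np.ndarray, eye_mask: np.ndarray, strength: float = 0.6) -> np.ndarray:
    """Blend masked pixels toward a Fremen-style blue.

    frame:    H x W x 3 float RGB array with values in [0, 1]
    eye_mask: H x W boolean array marking eye pixels; in production this
              would come from a trained segmentation model, here it is given
    strength: how far to push the masked pixels toward the target color
    """
    target = np.array([0.15, 0.40, 0.95])  # arbitrary "spice blue" we blend toward
    out = frame.copy()
    # Linear blend only where the mask is True; the rest of the frame is untouched
    out[eye_mask] = (1 - strength) * frame[eye_mask] + strength * target
    return out
```

In a real pipeline this per-frame blend would be the trivial step; the advance the article describes is the model that finds the eyes reliably across thousands of shots, plus the human touch-ups that catch its mistakes (like non-Fremen characters).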

  • Sora, are we fu**?

    OpenAI made headlines again, this time with their new tool, Sora: a text-to-video generator that has created both excitement and concern across various sectors, including filmmaking. Why? Because unlike its contemporaries, Sora seems to produce more photorealistic videos (and animated ones), with a lot of movement, in a fairly reliable way. So, here is what you need to know and what we think about it. Prompt: Photorealistic closeup video of two pirate ships battling each other as they sail inside a cup of coffee. Sora: A Leap in Content Creation At its core, Sora is another text-to-video model. However, per OpenAI's website, it “is able to generate complex scenes with multiple characters, specific types of motion, and accurate details of the subject and background. The model understands not only what the user has asked for in the prompt, but also how those things exist in the physical world”. Which, judging by the published videos, is indeed quite impressive. Here's a bit of an info dump: Sora's videos are up to 60 seconds long in full HD 1920x1080. It is not yet available to the public, and there is no release date. It's still under assessment for critical areas. OpenAI will share the progress of their research on their website. You can see some of the videos they are generating over on their TikTok page: Fears and Turmoil Concerns have immediately surfaced about the implications of such technology regarding fraud, misinformation and other possible misuses (including copyright). According to OpenAI, they are “taking several important safety steps ahead of making Sora available”, and add: “we are working with red teamers — domain experts in areas like misinformation, hateful content, and bias — who will be adversarially testing the model”.
Prompt: An extreme close-up of an gray-haired man with a beard in his 60s, he is deep in thought pondering the history of the universe as he sits at a cafe in Paris, his eyes focus on people offscreen as they walk as he sits mostly motionless, he is dressed in a wool coat suit coat with a button-down shirt , he wears a brown beret and glasses and has a very professorial appearance, and the end he offers a subtle closed-mouth smile as if he found the answer to the mystery of life, the lighting is very cinematic with the golden light and the Parisian streets and city in the background, depth of field, cinematic 35mm film. Although this is clearly important, this is not an article about that. This is about Sora's impact on filmmaking, with some saying that we're doomed; that it's the “end for directors”, or worse. New technologies have always sparked fear and apprehension, but history shows us that the introduction of new technology, while initially daunting, does not necessarily lead to the obsolescence of traditional skills and roles, but rather to the rise of new sets of skills. This is similar to when everyone was going to (magically) become a photographer, because we all have an HQ camera in our pockets. It did not happen. The majority still takes crappy pictures of their food (no offense). Sora and similar tools are unlikely to replace the nuanced expertise of film directors and technicians. But they will certainly change the landscape, as they offer more individuals new tools to bring their vision to life, potentially enriching the industry with a wider array of stories and perspectives. Technology and Cinema: An Ongoing Evolution Let's not forget that the film industry has always thrived on technological innovation, from the invention of the camera itself to the use of CGI for visual effects in place of matte paintings or stop motion. Each advancement has brought changes, opening new avenues for creative expression.
Sora, in this light, is but the latest chapter in filmmaking's ongoing evolution, offering a broader audience tools that were once the exclusive domain of Hollywood. Prompt: Borneo wildlife on the Kinabatangan River The use of stock footage may become obsolete, although further testing is required to determine how well generated footage integrates into filmed scenes. We have tested numerous AI tools to assess whether they're production-ready. As of the publication date of this article, few have reached that stage; one that has is Adobe Photoshop's AI generative tool, which we used to digitally demolish a large building (would you like to know how we did it? Let us know in the comments!). That's another thing... it will change how we make VFX (again!). So, let's brace! AI Video Generators: A New Medium for Storytelling The rise of AI video generators, such as Sora, marks an exciting evolution in digital content creation and filmmaking. However, Sora is not the only player in this field. Google is also researching its own technology, called Lumiere, and Pika is emerging as a strong competitor to Runway. The latter has even introduced specific features like zoom in/out and pan left/right, alongside the traditional text and image prompts, which is indeed very cool. And these are just a few examples! These tools promise to democratize video production. Yet our testing reveals a more complex reality. While they empower creators with new forms of expression, mastering these platforms often requires a blend of creativity, technical skill and patience, which means the rise of a new type of artist. The allure of AI-assisted video creation is undeniable, yet it comes with a learning curve and an inherent element of randomness that challenges the notion of 'effortless' content generation. It is not as magical as it seems. Embracing the Future AI is bringing forth a new era for ideas. It offers a new lens through which to view creativity.
Before, you not only had to have the idea, but also the know-how (how to paint light, choose colors, etc.). Now, it's more about the reach of your imagination and how far you push it. This shift is what we find both exhilarating and intimidating. Indeed, with AI, anyone can create an aesthetically pleasing image, but it's yet another one in an endless pool of content (which may lead to a potential 'standardization' of art). So, the real questions are: what are you going to ask it? How are you going to edit it and make it your own? How are you going to use it to push your own creativity? Prompt: A petri dish with a bamboo forest growing within it that has tiny red pandas running around. If you're a filmmaker or an artist, just give it a go! There are hundreds of AI tools today (and not only video generators). See which ones can fit or enhance your workflow, adapt to your pipeline, streamline processes and even help you explore new ways to shape an idea. Use AI to visualize pitches, to help you write dialogue in a language in which you're not fluent, and to discover fresh strategies for transferring ideas from your imagination to paper and, eventually, onto the screen. Thus, the mixed feelings of excitement and concern surrounding technologies like Sora are understandable. However, their true value and impact will be shaped by our choices as artists in how we adopt and integrate these tools. By viewing them as enhancements to human creativity and expertise rather than replacements (this also holds true for studios… cough cough), the filmmaking industry is set to continue its evolution, crafting stories that inspire and amaze.

  • 5 Must-Watch film documentaries!

    To make a film, an incredible variety of talents and a diverse range of artists are needed. Behind-the-scenes footage often highlights directors, actors and actresses, but what about the rest of the team: the technicians, designers and unseen creatives tasked with bringing to life a world, a vision, that doesn't yet exist? To shed light on these essential but often overlooked aspects of filmmaking, we've curated a list of 5 film documentaries we highly recommend. Each one offers a glimpse into the creative process, celebrating the imagination, innovation and relentless dedication required to craft the movies we love. Creature Designers: The Frankenstein Complex This documentary explores a century of monster creation in cinema. It highlights the work of artists who, like Frankenstein, bring to life creatures that have become iconic on both the big and small screen. Discover the world of special effects (SFX) masters who, equipped with a simple idea, a piece of cardboard and tons of silicone, shape the nightmares that haunt our nights. Featuring interviews with renowned artists in the field such as Rick Baker, Alec Gillis, Phil Tippett, Matt Winston (Stan's son), and more, this documentary is a must-see! Because even though we do a lot of VFX, we always advocate for a mix of techniques. The Movies That Made Us, from Netflix Less known but equally fascinating, this documentary series, now in its third season, delves into the behind-the-scenes of cult movies. From "Jurassic Park" to "Die Hard" and "Pretty Woman", to a special Halloween season featuring movies like "RoboCop" and "Friday the 13th", this series unveils the secrets (and misadventures) of movie production. You'll discover a variety of interviews, from directors to screenwriters, set designers to special effects experts. It showcases a wonderful melting pot of all the professions involved in filmmaking. What did we learn?
That nothing is really completely under control when making a movie. Robert Rodriguez: Rebel Without a Crew In this six-episode docu-series, we follow director Robert Rodriguez as he reenacts the exercise that changed his career over 25 years ago: making a movie in 14 days with $7,000. Although today he can save enormously thanks to the sets he can build directly in the Troublemaker Studios hangars with props from old movies, he shares his creative process with us. This series is for those who love to have the camera on their shoulder; it's filled with tips and tricks, from writing to organizing production, working with actors, creating low-cost effects, and the magic of sound and editing. We must say... we are tempted to do it. Who would be up for this adventure? Contact us! The Lord of the Rings: The Appendices This is an oldie, we know, but such a goodie. We've never had the opportunity to delve so deeply into the production of a movie as we did with these so-called appendices that came with the LOTR DVDs. For over six hours, dive into Tolkien's universe as brought to life by Peter Jackson. The making of the films was an epic in itself, just like the story they told: 5 years of prep, 1 year to film the 3 movies simultaneously, a Herculean task of creating props, thousands of people involved, up to 8 additional directing units, and more than 4 hours of dailies to review each evening. In short, the appendices provide in-depth content on the making of LOTR; or the scaffolding necessary to make a fantasy film. You can watch them directly on YouTube! (ah, that 2000s vibe, right?) Jodorowsky's Dune This documentary explores filmmaker Alejandro Jodorowsky's failed attempt to adapt the novel "Dune" in the mid-1970s. The film delves into this ambitious yet unrealized project, which would have presented a unique and avant-garde vision for the market of that era... and perhaps even for today's market.
It brought together an unparalleled team, featuring iconic artists and pop culture figures like Moebius and Giger, as well as Dalí and even David Bowie. What captivated us is that, even though the film was never made, Jodorowsky's vision paved the way for subsequent science fiction masterpieces such as Alien, Star Wars, and even The Matrix. Have you seen any of these? Which one do you recommend we watch next? Let us know in the comments.

  • What is... a previs?

    Previsualization, or previs, is a way to visualize film scenes in advance. It helps directors plan and conceptualize complex moments, often weeks or months before shooting begins. ©Previz Orbitae - La Piñata Often done in 3D, this approach is not reserved for productions with special effects; it is also very useful for films without any. Here's why previs has become indispensable in production: Complex Scene Planning: Does the scene have many characters entering and exiting the frame? A fast pace? Visual effects? Previs helps directors orchestrate each element of the scene, defining what they want to tell and how to tell it. Camera movements, actor interactions, lens types, rhythm, etc., are all points that can be resolved in advance, ensuring smooth and efficient execution during filming. ©Previz Orbitae - Eyes on the menu Stunt Previs: Executing scenes with a car chase, a gang fight, an avalanche or an animal attack can seem like a daunting task. Stunt previs is key to defining the choreography and coordination between actors and the camera. Depending on the type of stunt, this step becomes crucial in preparing scenes that require millimeter precision and maximum safety. Video excerpt from ©Dave Macomber Set Construction: By creating sets in previs, it is possible to determine the desired lighting, how the set will be dressed, what can be eliminated and what must absolutely be kept. For example, for "John Wick 3", the set of one of the final scenes was created in 3D, allowing for informed decisions before its construction. Budget Optimization: By previsualizing some scenes, directors and producers can significantly reduce production costs by avoiding wasted time on set and optimizing resource use. As we know... time is money. For instance, for Netflix's "Society of the Snow", a previs of the plane crash was created, which helped the team determine how many sets they needed to build for this particular and complicated moment.
BTS images - ©Netflix Communication with Teams: Previs also serves as a visual reference for everyone involved in the project. It helps the director communicate his or her vision and ensures that all members clearly understand the composition and tone of the scene to be shot. Bonus - Experimentation for Directors: This process offers great creative freedom. With this tool, directors can experiment with different camera angles, lighting and narrative approaches before making final decisions, thus promoting a richer and more accomplished artistic expression. All in all, previsualization is not just a technical tool; it's an extension of the creative vision, a facilitator of communication and an essential instrument for the effective management of a cinematic project. That's why, at Orbitae, we offer 3D previs services for complex scenes, as well as stunt previs. Interested? Don't hesitate to contact us!
