Much has been said about Steven Spielberg’s uncanny ability to tell a story within a frame and set up geography, and usually folks use his long oners as examples of this talent. In fact, we break down a "Raiders of the Lost Ark" (1981) oner on the Lighter Darker: The ILM Podcast, in Episode 3 (starting around 27:35).
Monday, January 13, 2025
"Indiana Jones and the Last Crusade" Mini-Oner
Wednesday, December 11, 2024
"Skeleton Crew", Neel and Misinformation
The cycle of downplaying or mischaracterizing digital effects is becoming distressingly predictable. I wrote about it at length, using the marketing of "Gran Turismo" — a film I didn't work on — as my example.
When the misinformation comes for a project that I worked on (which has happened many times in the past), it becomes particularly infuriating.
In the lead-up to the release of "Star Wars: Skeleton Crew", Slashfilm wrote a piece on December 1st that loudly and proudly proclaimed that one alien character on the show was not created with digital effects. The headline "The Star Wars: Skeleton Crew Character Who Surprisingly Isn't CGI" isn't exactly leaving any wiggle room. The piece says "The elephant-like design of main character Neel (Robert Timothy Smith) may look a whole lot like a fully CGI creation, but that's actually quite far from the truth."
In reality, Neel was brought to life using a combination of techniques, including digital effects. Clayton Sandell documented this on Bluesky, based on interviews with "Skeleton Crew" ILM VFX supervisor Eddie Pasquarello and ILM animation supervisor Shawn Kelly.
From Clayton's reporting:
"Neel is a beautifully-creative mix of practical AND digital techniques: the voice & physical performance of young actor Robert Timothy Smith and a stunt performer; digital animation; and puppetry... Kelly says that in about HALF of all shots, however, the Neel puppet was either augmented digitally or replaced entirely, depending on the storytelling needs. In this shot from episode one, for example, Kelly says Neel’s head is 100% digital."
I was the compositor on this shot. (I was a lead artist on "Skeleton Crew" at ILM.) Imagine my shock when I read the Slashfilm headline that invalidated the hard work our team put into a character, and watched a false mythology form right before my eyes.
On Bluesky, I politely asked Slashfilm to correct or amend their headline and article based on Clayton's reporting. And they did.
The new December 10 headline is "One Star Wars: Skeleton Crew Character Is A Stunning Blend Of Incredible Visual Effects" which is much better, and extremely accurate. A key sentence was added to the piece, as well:
"Neel was made using a stunning, seamless combination of practical and digital effects."
(Hilariously, the Slashfilm URL remains as it was originally published, which includes the string: 'star-wars-skeleton-crew-character-neel-not-cgi'.)
I humbly yet forcefully ask media outlets not to fall into the false mythology trap. Do not proclaim that a certain set piece, stunt, or effect from a movie was done "completely" with any one technique without having absolute certainty of the facts. Making movies is a team effort, and there isn't a "war" between the practical effects teams and the digital teams. We're all working together to make the best movie we can — we are in a symbiotic relationship, and anyone who tells you otherwise is trying to sell you a false mythology.
Friday, November 08, 2024
Todd Vaziri on The Incomparable, Talking About "Pitch Black"
I recently guested on The Incomparable to talk about one of my favorite science fiction movies, "Pitch Black" (2000).
Host Antony Johnston with Erika Ensign, Tony Sindelar, and Todd Vaziri. Vin Diesel, Radha Mitchell, Keith David, and a lack of bozos… It can only be 2000’s “Pitch Black,” one of the finer entries in the always-popular “Alien” homage movie genre. We enthuse about elevated filmmaking, great decisions, and low-budget effects.
Listen: https://www.theincomparable.com/theincomparable/741/
Friday, October 18, 2024
Center Framing is Not New
Seen on social media: "One thing that I did not like at all about The Substance was how it was filmed as if being cut into TikToks was its ultimate end goal. The action in every scene happens pretty much in the middle of the screen... It just looks so lifeless."
I was going to go off on the sad state of media literacy in today's culture, but I reconsidered and thought I'd rather do something fun instead.
The original post implies that the filmmakers of "The Substance" (2024) chose to center-frame their film so that it would look good on TikTok. Which is absolutely bonkers. It also implies that there was very little artistic intent behind the framing choices of the movie.
Just to illustrate the lunacy of implying that the central reason for center-framing a movie is TikTok, I decided to drop actual frames from "The Shining" (1980) -- a film with prominent center-framing -- into an iPhone 16 screen without doing any repositioning or scaling.
Who knew Stanley Kubrick made his film to look good on TikTok?! Amazing foresight from the master filmmaker!
Wednesday, October 16, 2024
This Shot from "Seven" is Not a Visual Effects Shot
A filmmaker friend reached out to me with a question about one of our shared favorite movies of all time, so I did what I sometimes do - I went totally overboard to find a satisfying answer and then wrote a long-winded article about it.
• • • •
Near the end of David Fincher's 1995 masterpiece "Seven", John Doe takes Somerset and Mills to the middle of nowhere to reveal his final surprise. They drive to a desolate area surrounded by high tension power lines and towers. A combination of long lenses and wide lenses was used to alternate between long-lens compression of the space (the first image below) and scattered wider compositions that illustrate the desolation of the environment (the second image below).
Then comes this gorgeous shot, which happens to be one of my favorite single shots in the movie. A simple, slow tilt down of the car racing down the road, filmed with a long lens. It's breathtaking because it looks other-worldly, and some of that is due to the visual "compression" that happens to a scene filmed with a telephoto lens: objects that are far apart from each other "compress" in depth and appear to exist very close together in real-world space. Filmmakers make lens choices to give a scene a deliberate, artistic feel. It's one of the many tools in a filmmaker's toolbox.
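To put rough numbers on that compression effect (every distance, height, and focal length below is hypothetical, purely for illustration — this is my sketch, not anything from the production): an object's rendered size scales with focal length divided by distance, so backing the camera way up and using a long lens keeps the subject the same size in frame while the far-off towers balloon relative to it.

```python
# Toy illustration of telephoto "compression" (all numbers are made up).
# An object's apparent size on the sensor is proportional to
# focal_length / distance.

def image_size(true_size_m, distance_m, focal_length_mm):
    """Relative rendered size of an object (arbitrary units)."""
    return true_size_m * focal_length_mm / distance_m

car, tower = 1.5, 30.0   # hypothetical real-world heights in meters

# Wide lens, camera close: car at 5 m, towers 100 m away.
wide_car   = image_size(car,   5.0,   24)
wide_tower = image_size(tower, 100.0, 24)

# Long lens, camera far: car at 100 m, towers at 195 m. The longer
# focal length keeps the car the same size in frame...
long_car   = image_size(car,   100.0, 480)
long_tower = image_size(tower, 195.0, 480)

print(wide_car, long_car)       # 7.2 7.2  -- the car renders identically
print(wide_tower / wide_car)    # 1.0      -- wide: towers same size as car
print(long_tower / long_car)    # ~10.3    -- long: towers loom ten times larger
```

Same car, same towers; only the camera distance and focal length changed, yet the background now towers over the subject — which is exactly the "things far apart look stacked together" feeling of that shot.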
Write a blog post about how it's not true. Tweet how it's not true. Do a Myspace post. Type it out, make some photocopies and post them in your neighborhood. Flood the channel with truth!
Friday, October 04, 2024
Lighter Darker: The ILM Podcast
Welcome to Lighter Darker: The ILM Podcast, where we focus on the creative process of filmmaking and the art of visual storytelling. Hosted by ILM Chief Creative Officer Rob Bredow and ILM Compositing Supervisor Todd Vaziri, we share behind-the-scenes stories that illustrate the many crafts that come together to create a motion picture, TV series, or special venue project.
Whether you’re a seasoned professional, an aspiring filmmaker, or a fan of immersive experiences, Lighter Darker provides valuable insights, inspiration, and a deeper appreciation for the artists behind the projects we undertake at ILM in visual effects, animation, and immersive entertainment. We have a terrific lineup of special guest filmmakers who join the team for upcoming episodes to discuss the creative process of filmmaking and the art of visual storytelling.
Sunday, June 23, 2024
Hal Hickel on Creating Tarkin
Back in 2020, Hal Hickel answered a Quora question with great detail about how we created Grand Moff Tarkin for "Rogue One" (2016), and in the interest of film history preservation, I got Hal's permission to reprint it here. (I was a lead on the digital human team at ILM for Tarkin, and worked closely with Hal on the film.)
Hal summarizes our process succinctly, and corrects many misconceptions and untruths about how we made Tarkin, so I feel like this is an important document. To be frank, I hesitate to talk publicly too much about our Tarkin and Leia work for "Rogue One" because for some folks it generates a lot of... emotion.
I, like Hal, have no interest in defending the quality of our work. I'll say this: immediately after the movie came out, I talked to a lot of regular, non-industry people who saw the movie and asked them their thoughts on Tarkin, 'you know, the older gentleman who was Krennic's boss.' Many folks didn't understand the nature of my question, nor why I was asking it. They liked his performance, and didn't think anything further of it. Then I let them know that Tarkin is a digital creation, meant to resemble Peter Cushing who appeared in the original "Star Wars" (1977) and who died in 1994. I got a lot of stunned reactions from relaying that news. A lot of people who saw "Rogue One" had no idea that Tarkin was a digital, synthetic character, and just assumed it was a regular human actor.
I hope you like Hal's piece.
. . . .
Quora: Why does Tarkin's CGI in Rogue One look so plastic-y? Could they have made it look more realistic?
Answered by Hal Hickel, Animation Supervisor for "Rogue One"
Hi, I was the animation supervisor on Rogue One, and as such I was intimately involved with the creation of Tarkin.
I’ve decided to chime in for one purpose only, to clarify the process we used. I have no interest in trying to convince anyone to like the results more than they do, or to argue with anyone about how “real” our work looked in the film. Again, I just want to clarify our process for informational purposes.
The broad plan was to hire an actor, film them on set in costume, and just replace the head with a CG Tarkin head, leaving the real body in the scene. The actor on set would be wearing a helmet with small cameras mounted to it, to record their facial performance (similar to what you’ve seen in the behind the scenes footage from Avatar, or Planet of the Apes).
That’s what we did, excepting that in about 30% of the shots, we opted for full replacement (head and body) with CG, because for certain shots it just made more sense.
Guy Henry was cast because he’s a terrific actor, and had the bearing and vocal quality we were looking for. It was helpful that he also had a certain physical resemblance (high cheekbones, etc), though that was not essential, given that the plan was to completely replace his head with our CG Tarkin. That said, when remapping the facial expressions of one person onto another (Henry to Cushing), the more similar they are, the easier it’s going to be.
The intention was never for Guy to do either a vocal, or physical “impression” of Peter Cushing, but rather to give us a performance that “felt” like Tarkin, both physically and vocally. So we never asked for, or expected a spot on vocal match, or for Guy to smirk, etc, like Cushing.
We didn’t do any modulation or any other audio tricks with Henry’s voice. We didn’t compare waveforms with Cushing audio, talk to his old manager, or any of that other stuff mentioned elsewhere in this thread. We just used Guy Henry’s voice. I’m sure Guy watched the Tarkin scenes from ANH endlessly, and did his best to find a tone and delivery that felt right.
Guy didn’t wear any prosthetics or makeup as part of the process, with the exception of the dots that help us track his facial movement. Someone in this thread talked about “makeup, cosmetics, physical altering”. No. Again, we just put dots on Guy’s face to track its movement, that’s all.
Guy was filmed on set, in the costume. The movement of the dots on his face, and his voice were recorded simultaneously during filming. I mention this, because some VFX companies prefer a method where Facial Capture is done separately, on a specialized stage at a later time. We prefer to capture an actor’s performance all at once (voice, body, face) whenever possible.
We also scanned Guy Henry on the ICT Light Stage, to give us a high resolution CG model of Guy Henry, and to capture his skin texture. Now why would we need a CG Guy Henry?
The CG version of Guy Henry (left) and the real Guy Henry as photographed (right), from Rogue One - A Star Wars Story: The Princess & The Governor Featurette
We needed it for a few reasons: One is that once we’ve tracked the motion of the dots on his face in a given piece of performance, rather than immediately applying that motion data to the CG Tarkin, we instead apply it first to the CG Guy Henry. This gives us an apples to apples comparison to see if we’ve captured and processed the facial performance accurately. When we’re satisfied that we have, we then apply it to Tarkin.
Another reason is that the lighting data captured with Guy Henry on the Light Stage gives us a sort of “ground truth” that we can compare our CG Tarkin to, to see if his skin is reacting to light realistically. Also, because there are many things about the fine details of Guy Henry’s skin that are appropriate for Tarkin’s skin (general tone, pores, etc), we can use the Guy Henry textures as a way to get a leg up on the Tarkin skin textures, rather than starting from zero.
Ok, so we’ve hired an actor, and shot them on location. We’ve built a CG copy of that actor in order to be able to check our facial capture data to see that it's accurate, and to give us a “ground truth” for the skin texture and lighting.
Now we (obviously) have to build a CG Tarkin.
I noticed some comments in another answer in this thread about his mouth “not being aligned to his chin”, or the ears being “too long”. Again, I’m not here to argue the merits of our work, but I think it’s useful to point out that if you assembled hundreds of photographs of Peter Cushing (as we did), you would find that he can look vastly different from one photograph to another, depending on his expression, the lighting, the makeup, the focal length of the lens, the year the photo was taken, etc etc. So comparing a single frame of our Tarkin to a single photo of Cushing is not a particularly valid way to troubleshoot whatever issues there may be.
Luckily, we didn’t have to work from just photos. We had in our possession a life casting of Peter Cushing’s face. It was made not long after New Hope, so it was very accurate in terms of Cushing’s age, etc. Of course we know that sometimes the process of taking a life cast can slightly distort the face of the subject (the weight of the casting material can pull down on the skin), so we were mindful of that. That casting was a terrific starting point for us, and gave us very accurate information.
Starting from there, a very accurate CG model of Tarkin was created. As well, highly detailed textures, with pore detail, age spots, veins, etc etc. The CG hair groom was challenging, as the styling on Cushing for that role was a bit eccentric.
So taking one shot from the film as an example, let’s say a medium close up:
We track the movement of Guy’s head through space, so we can move the CG Tarkin head in the same way.
We track the dot motion on Guy’s face to extract his facial performance. We apply that motion to the CG Guy Henry, and if we’re happy with how it looks, we apply it to the CG Tarkin. By the way, someone in this thread theorized that perhaps the CG Tarkin was missing “micro expressions”. While we are always trying to increase the accuracy, and detail of our Facial Capture system, I have to say that even now, we are capturing very fine detail, including very tiny, barely perceptible micro movements. We are familiar with Paul Ekman’s work, and the importance of Micro Expressions, and have tried hard to be sure that level of fidelity exists in our work. If it was happening on Guy Henry’s face, it was happening on Tarkin’s face.
Now we have the real Guy Henry body, with the CG Tarkin head. We paint out any bits of Henry’s head that Tarkin doesn’t cover up.
We make adjustments to the facial performance to make it feel more “Tarkin”, since (unsurprisingly), Guy Henry doesn’t use his facial muscles the same way that Peter Cushing did. Guy doesn’t smile like Cushing, doesn’t form phonemes like Cushing, etc. So we have to do a sort of “motion likeness” pass. This is done by our animators, using a very light touch. Note: the point is NOT to change the acting choices made by Guy Henry, it’s just to adjust things so that when Guy chooses to smile, it looks like a Tarkin smile, not like a Guy Henry smile. Of course in doing so, we have to be very careful to maintain exactly what sort of smile it is. We don’t want to transform a mocking, insincere smile into a genuine, warm smile.
The Tarkin head with final facial performance is lit to match the lighting in the footage, and rendered.
The rendered CG Tarkin head is composited onto the real Guy Henry body.
There are of course many many steps to each one of the steps I’ve outlined above. Each one of these steps encompasses the highly skilled work of many many very talented artists and technicians.
So again, like it, don’t like it, that’s none of my business. I just wanted to get the facts out there, in terms of our process, because there was some inaccurate information being posted.
Thanks for reading.
H
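A footnote to Hal's piece, from me: the last step he outlines — compositing the rendered CG Tarkin head onto the plate of Guy Henry's body — is built on the classic premultiplied "over" operation. The sketch below is my own toy illustration with made-up pixel values, not anything from the actual pipeline, but it shows the core math: the foreground element is added to the background scaled by whatever the foreground's alpha leaves uncovered.

```python
# Toy illustration (not production code): the premultiplied-alpha "over"
# operation used to layer a rendered element onto a live-action plate.

def over(fg_rgb, fg_alpha, bg_rgb):
    """Composite a premultiplied foreground pixel over a background pixel:
    result = fg + bg * (1 - fg_alpha)."""
    return [f + b * (1.0 - fg_alpha) for f, b in zip(fg_rgb, bg_rgb)]

plate = [0.20, 0.30, 0.40]   # hypothetical background plate pixel

# A fully opaque rendered pixel completely replaces the plate.
solid = over([0.8, 0.6, 0.5], 1.0, plate)
print(solid)   # [0.8, 0.6, 0.5]

# A half-transparent edge pixel (motion blur, hair, soft edges)
# blends the element with the plate behind it.
edge = over([0.4, 0.3, 0.25], 0.5, plate)
print(edge)    # roughly [0.5, 0.45, 0.45]
```

Where the alpha is 1.0 the CG head fully covers the plate; where it falls off toward 0.0 along soft edges, the live-action body and background show through — which is why those painted-out bits of Henry's real head matter so much.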
Thursday, June 13, 2024
Lighting Techniques and Style
The Hollywood cinematography of the interior of a cave, daytime, in a big sci-fi feature film in 1968. "Planet of the Apes" (1968), cinematography by four-time Oscar winner Leon Shamroy (18 total nominations!) who also shot "Cleopatra" and "The Robe".
The Hollywood cinematography of the interior of a cave, daytime, in a big sci-fi feature film in 2012. "Prometheus" (2012), cinematography by Dariusz Wolski, who also shot "Crimson Tide" and the original "Pirates of the Caribbean" trilogy.
Saturday, June 08, 2024
The Apple HomePod "Welcome Home" Ad was NOT 'All Practical'
In an effort to combat misinformation, I'm going to make short blog posts so maybe, just maybe, they can make it into search engine results. Misinformation about how movies, TV shows, and commercials are made is overwhelming, and I feel like I need to do what I can to try and slow it down.
A tweet highlighting the amazing work in the Apple HomePod ad said: "The fact that this Apple homepod ad is all real still blows my mind. The apartment stretching is not CGI, just practical effects, holy shit!"
This is not true.
The Spike Jonze-directed HomePod spot "Welcome Home" from 2018 is an amazing piece of art, due to stunning production design, physical effects, choreography, lighting and camera work BUT ALSO extensive digital visual effects and computer graphics.
Janelle Croshaw Ralla was the HomePod spot's visual effects supervisor. She also supervised visual effects for Jonze's "Kenzo World" spot with Margaret Qualley, and was the visual effects supervisor of "John Wick 4".
My original tweet: https://x.com/tvaziri/status/1799473454019928117