- In Part 1, we talked about winning the visual effects Academy Award and trying to determine whether critical acclaim or box office popularity informs Academy voters' choices.
- In Part 2, we barfed out 23 years of data, comparing each year's nominees' critical acclaim and box office earnings.
- Here, in Part 3, we'll try to make sense of all that data.
How can we possibly compare all of this data that appeared in Part 2? Well, for starters, and simply enough, let's ask two questions:
- In how many years did the visual effects Oscar winner also earn the highest critical acclaim among the nominees?
- In how many years did the visual effects Oscar winner also earn the biggest box office take?
Right off the bat, it looks like critical acclaim has been a better predictor than box office, with 15 matches versus 13 out of 23 years of data (including a 10-year winning streak). Interesting, but I think this is far too simplistic a way to look at the data, since it doesn't account for the relative differences in acclaim and box office among the nominees. Plus, acclaim's two-year accuracy advantage isn't very much, considering our data set spans 23 years.
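For the curious, here's a minimal sketch of that tally in Python. The data structure is my own invention, and the 2000 figures are illustrative placeholders, not the actual numbers from Part 2; you'd fill in all 23 years to reproduce the counts.

```python
# Each year maps to its three nominees: (title, Tomatometer %, box office $M).
# The 2000 values below are placeholders, not the actual figures from Part 2.
nominees = {
    2000: [("Gladiator", 76, 188),
           ("The Perfect Storm", 47, 183),
           ("Hollow Man", 27, 73)],
    # ... the other 22 years of data would go here ...
}
winners = {2000: "Gladiator"}  # the actual Oscar winner for each year

acclaim_hits = box_office_hits = 0
for year, films in nominees.items():
    best_reviewed = max(films, key=lambda f: f[1])[0]  # highest Tomatometer
    top_grossing = max(films, key=lambda f: f[2])[0]   # biggest box office take
    acclaim_hits += best_reviewed == winners[year]
    box_office_hits += top_grossing == winners[year]

print(f"Acclaim matched the winner in {acclaim_hits} of {len(nominees)} years; "
      f"box office matched in {box_office_hits}.")
```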
What do I mean by relative differences? Look at the data from 2000. "Gladiator" was the clear victor in critical acclaim, by a wide margin. It was also the box office champ; however, its victory at the box office was quite slim. And, over time, does a large victory margin in one area dictate the Academy Award?
After I tabulated all of this data, I let it sink into my melon for a while and ultimately came up with two methods of charting, one per category: how much of a margin exists between the Oscar winner's value and that of the next-highest-rated film?
Take a look at the next chart, which compares the Tomatometer rating of the Oscar winner (in blue) with that of the nominee with the next highest Tomatometer rating (in green).
One fascinating curiosity of this chart is that it indicates that critical acclaim for the top two nominees seemed to move together from year to year. For example, in 1995, both "Babe" and "Apollo 13" were extremely well-reviewed, and the very next year, both "Independence Day" and "Twister" were relatively panned.
We obviously notice how acclaim accurately predicted the award in 15 out of 23 years, but, interestingly, when it was wrong, it usually wasn't wrong by much. The margin is quite small in the years it proved incorrect: look at 1993, 1994, and 1995, where the blue line (the winner) tracks the green line (the next-highest nominee) quite closely, meaning acclaim was wrong, but only just. The exceptions are 1992 (when "Death Becomes Her" won the award over the better-reviewed "Batman Returns") and 2006 (when "Pirates 2" won the award over the better-reviewed "Superman Returns"), where the relative amount of wrongitude was significantly high.
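To make "wrong, but not by much" concrete, here's how that per-year acclaim margin could be computed, reusing the hypothetical `nominees`/`winners` structures from the earlier sketch. This is one plausible reading of the chart, not necessarily how it was actually built.

```python
def acclaim_margin(films, winner_title):
    """Winner's Tomatometer minus the best score among the other nominees.

    Positive: the winner was also the best-reviewed nominee (acclaim was right).
    Negative: a better-reviewed nominee lost, by abs(margin) points.
    """
    winner_score = next(score for title, score, _ in films if title == winner_title)
    best_other = max(score for title, score, _ in films if title != winner_title)
    return winner_score - best_other

# e.g. acclaim_margin(nominees[2000], winners[2000]) -> +29 with the placeholder data
```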
What happens when we look at box office the same way we just looked at critical acclaim? Well, acclaim was judged on a Tomatometer percentage, which gives us a good apples-to-apples comparison between years. The same simply isn't true of raw box office. The only fair way I could devise to compare box office returns across years was to add up all three nominees' box office totals and determine what percentage of that combined gross the Oscar winner earned. You can see this data in the pie chart next to the box office returns on each year's charts.
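As a sketch, that share computation might look like this, using the same hypothetical structures as before:

```python
def box_office_share(films, winner_title):
    """The winner's slice of the three nominees' combined gross.

    With three nominees, anything near 1/3 is a dead heat; well above 1/3
    means the winner dominated the box office that year.
    """
    combined = sum(gross for _, _, gross in films)
    winner_gross = next(gross for title, _, gross in films if title == winner_title)
    return winner_gross / combined

# e.g. box_office_share(nominees[2000], winners[2000]) -> ~0.42 with the placeholders
```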
This time, we see a wide disparity between the years box office was correct and the years it was wrong. It's all over the map, with swings of more than 50% from year to year.
The next chart summarizes the previous two, comparing the percentage accuracy of the two predictors, with Critical Acclaim in purple and Box Office in red.
Check out the purple line, representing Critical Acclaim Accuracy, and notice how minor its deviation is relative to Box Office Accuracy, and also how often it stays in positive territory.
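The post doesn't spell out the exact formula behind this summary chart, but one plausible construction puts both predictors on the same winner-minus-runner-up scale: Tomatometer points for acclaim, and percentage points of combined gross for box office. Continuing the earlier sketches:

```python
def box_office_margin(films, winner_title):
    """Winner's share of the combined gross minus the best share among the
    other nominees, in percentage points. Positive means box office was right."""
    combined = sum(gross for _, _, gross in films)
    winner_share = next(g for t, _, g in films if t == winner_title) / combined
    best_other_share = max(g for t, _, g in films if t != winner_title) / combined
    return (winner_share - best_other_share) * 100

# One row per year: positive values mean that predictor picked the winner.
for year in sorted(nominees):
    films, winner = nominees[year], winners[year]
    print(year, acclaim_margin(films, winner), box_office_margin(films, winner))
```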
For me, this clinches it. Although it can be wrong, critical acclaim is generally a better predictor of the winner of the Academy Award for visual effects than box office popularity. In the past 23 years, acclaim has been wrong by a significant amount only twice, while box office has been wrong far more often, and with far less consistency.
And, to state the obvious, this theory surmises that critical acclaim initially drives the wave of publicity: "For Your Consideration" advertisements, Oscar "buzz," and the self-fulfilling prophecy of "if people are saying it's Oscar-worthy, then it must be Oscar-worthy," all of which ultimately informs Academy voters.
Whew... that was quite a journey. Was it worth it? Probably not. And there are probably a few cracks in the data, too.
Sneak peek: Oh, you know where this is going. On January 22, the nominations for the 80th Annual Academy Awards will be announced. And we'll be here, with statistics in hand, to test our theory and try to predict the winner of the visual effects statuette. And here's the link to Part 4.
Mike V writes:
Great blog!
Have you looked at comparing advertising money spent by the majors on promoting their films prior to the Academy announcing the picks of the year? Can they buy their way to the awards evening?
Great question. Unfortunately, there is no real way to track the amount of money studios spend on "For Your Consideration" ads, screeners, and advertising directly aimed at Academy (and other awards) voters. The amount of moolah they spend is their business, and they're not going to brag about how many millions of dollars they spent on those ridiculous ads in The Hollywood Reporter and Variety. Except for, maybe, the Weinsteins. :)