Can AI Predict a Movie's Success? Algorithmic Screenplay Service 'ScriptBook' Causes Major Backlash
Is AI the future of screenwriting? Not if screenwriters can help it.
"Subjective decisions lead to box office failure," reads a tagline from the new algorithmic service ScriptBook, which claims to predict a screenplay's critical and box office success.
For $100 a pop, ScriptBook users upload a screenplay to be analyzed by the company's patented software, Script2Screen, which generates an AI-based assessment of a project's likely commercial and critical success, along with "insights on the storyline, target demographics, market positioning, distribution parameters," and more. ScriptBook trained its algorithms to detect the patterns compelling storylines have in common, using a dataset of scripts that received theatrical releases between 1970 and 2016.
"The added value of our technology," the website further reads, "lies in the improvement on the current human decision-making process throughout the spectrum from script to screen, limiting false decision-making while maximizing the potential."
Nadira Azermai was inspired to create ScriptBook after witnessing firsthand one of the most egregious box office flops in history. In 2003, the rom-com Gigli, starring Jennifer Lopez and Ben Affleck, took in just $7.2 million at the box office against a budget of $76 million; the movie scored 6% on Rotten Tomatoes. Azermai, then an intern at the production company, believed the hemorrhage could have been averted at the script stage, and she created ScriptBook to assess the supposedly inherent predictive value of screenplays.
"It turns out that the problem happens when you receive a script as a production company or studio," Azermai told IB Times. "They greenlight the wrong project, and because of that, they lose money early on."
It's not hard to understand why Azermai yearns for a better system. Hollywood executives have been trying to crack the code of box office success since the dawn of Tinseltown, and the odds are rough: the average screenplay has a 0.003% chance of reaching the big screen, and almost 90% of films lose money at the box office (while just 6% account for four-fifths of Hollywood's total profit). Last year, Relativity Media went bankrupt trying to beat the system with an algorithm that used the Monte Carlo method to model a film's likely theatrical performance. In Relativity's case, it turned out that humans handle complex bets with many unknown variables (and a film has plenty) better than computers do.
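To see what a Monte Carlo box office model even looks like, here is a deliberately toy sketch. Everything in it is an assumption for illustration (the inputs, distributions, and numbers are mine, not Relativity's actual model): it samples uncertain box office inputs many times and estimates the probability a film earns back its budget.

```python
import random

def simulate_profit(budget, opening_mean, opening_sd,
                    mult_low, mult_high, trials=100_000):
    """Toy Monte Carlo sketch (hypothetical inputs, not Relativity's model):
    estimate the probability that total gross exceeds the budget."""
    random.seed(42)  # fixed seed so the estimate is reproducible
    profitable = 0
    for _ in range(trials):
        # Sample an uncertain opening weekend (in $ millions), floored at zero
        opening = max(0.0, random.gauss(opening_mean, opening_sd))
        # Sample a total-gross-to-opening multiplier
        multiplier = random.uniform(mult_low, mult_high)
        if opening * multiplier > budget:
            profitable += 1
    return profitable / trials

# Illustrative (assumed) inputs: $110M budget, $35M +/- $15M opening,
# total gross lands between 2x and 4x the opening weekend.
p = simulate_profit(budget=110, opening_mean=35, opening_sd=15,
                    mult_low=2, mult_high=4)
print(f"Estimated probability of earning back the budget: {p:.0%}")
```

The hard part, and the part that sank Relativity, isn't running the simulation; it's choosing distributions for inputs nobody actually knows.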
Yet ScriptBook persists. On its website, the service claims that "in an analysis of 62 films from 2014-15 that were approved by studios, ScriptBook’s algorithm, which judges scripts against 220 parameters based on what has and has not worked before, would have called 'no' on 22 of the 32 loss-makers." Furthermore, the AI decision-support system greenlit all 30 of the films that performed well at the US box office.
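Taken at face value, ScriptBook's own numbers imply a confusion matrix, and standard classifier metrics fall straight out of it. The arithmetic below uses only the counts quoted above (62 films, 32 loss-makers of which 22 would have been rejected, and all 30 money-makers greenlit):

```python
# Confusion-matrix counts derived from ScriptBook's published claim
true_neg = 22            # loss-makers the algorithm would have rejected
false_pos = 32 - 22      # loss-makers it would still have greenlit
true_pos = 30            # money-makers correctly greenlit
false_neg = 0            # money-makers rejected (none, per the claim)

accuracy = (true_pos + true_neg) / 62
precision = true_pos / (true_pos + false_pos)  # of greenlit films, share that made money
recall = true_pos / (true_pos + false_neg)     # of money-makers, share greenlit

print(f"accuracy:  {accuracy:.0%}")   # 84%
print(f"precision: {precision:.0%}")  # 75%
print(f"recall:    {recall:.0%}")     # 100%
```

In other words, even by its own accounting, a quarter of the films ScriptBook would have greenlit still lost money, and the analysis is retrospective, scored against films whose outcomes were already known.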
The Black List, which announced a partnership with ScriptBook on its blog, framed the service this way: "Our goal is to provide writers another tool to help them analyze their work. This product does not replace the evaluation service performed by our team of professional readers — instead, it offers a new, cutting-edge way to look at screenplays. It provides objective metrics and analysis on a very subjective endeavor. Our philosophy is that machine learning combined with real human taste and intuition can help us understand the world better than either alone. Increasingly, these tools are being used by studios and production companies to make decisions, so we want to offer such a tool to writers at the lowest price point possible."
"[W]e believe adding computer intelligence to human intelligence is an exciting path to analyzing scripts in a smarter way," wrote The Black List's Director of Product and Data, Terry Huang.
A barrage of backlash followed on Twitter. Keith Calder, producer of Anomalisa, tweeted: "I have a problem with literally every paragraph and chart in this Blacklist [sic] blog post about ScriptBook."
He went on:
WTF is this chart? Look at the bizarre "creativity score" metric they invented. HEAT is significantly less creative than THE IRON LADY? pic.twitter.com/EAzRKXfrFW — Keith Calder (@keithcalder), April 19, 2017

Back to the audience rating chart… Am I really supposed to believe THE AVENGERS has a lower "audience rating" than G.I. JANE? What audience? pic.twitter.com/T5FMSj3uVr — Keith Calder (@keithcalder), April 19, 2017

The sample charts are all for the FENCES screenplay. Which means ScriptBook thinks the teenage son in FENCES is only 24% likable… pic.twitter.com/4oDkEtJI1t — Keith Calder (@keithcalder), April 19, 2017

I have to wonder how it's even possible that August Wilson was able to write "Fences" without the help of ScriptBook's "objective" analysis. — Keith Calder (@keithcalder), April 19, 2017

But for real… The mother and son in FENCES are both under 30% "likable" but Troy (one of the least "likable" protagonists ever) gets 55% — Keith Calder (@keithcalder), April 19, 2017

If I had to create fake charts to make fun of ScriptBook, it would be hard to outdo the real FENCES analysis charts they're actually using. — Keith Calder (@keithcalder), April 19, 2017

But ScriptBook is obviously snake oil garbage masquerading as an "objective" tool. Writers and execs, please give this a hard pass. — Keith Calder (@keithcalder), April 19, 2017

And I'm not super psyched that ScriptBook is being endorsed by a site that I generally consider to be a force for good in the film industry. — Keith Calder (@keithcalder), April 19, 2017
Others agreed, among them Brian Koppelman, co-creator of Showtime's Billions and writer of The Girlfriend Experience and Solitary Man.
The Black List founder Franklin Leonard issued a mea culpa yesterday announcing the abrupt termination of The Black List's partnership with ScriptBook. "The last 48 hours have made it very clear that an overwhelming majority of the writing community believes that the report has little value.... We’ve heard you, and the report is no longer available via the Black List," he wrote. (According to Leonard, Craig Mazin and John August were also vehement opponents of ScriptBook.)
On its website, ScriptBook offers Passengers as a case study for its algorithm's predictive success. While the film was still in pre-production, ScriptBook analyzed the script and forecast a $118.1 million US box office take.
Yet Passengers was met with inspired vitriol from critics. It has a 31% rating on Rotten Tomatoes, with the critical aggregate service concluding the film has a "fatally flawed story." Just yesterday, Chris Pratt admitted to Variety that he was "caught off guard" by the critical backlash the film received.
But he did point out an area in which Passengers was a success. "I'm proud of how the movie turned out," said Pratt, "and it did just fine to make money back for the studio." He's right: Passengers grossed nearly $300 million worldwide against its $110 million budget. (In the US, the movie grossed just over $100 million.)
If ScriptBook sells itself as "an AI-based assessment that indicates the commercial and critical success of a project," its banner case study has made good on only one of those two promises. Critical and commercial success sometimes coincide (see: Moonlight), but they often don't, and that gap is at the root of the indie film industry's sustainability problem.
In Hollywood, the most notable instances of box office and critical success arriving in tandem come from Pixar: 90% of Pixar's animated films earn back more at the box office than they cost to make, and, with a few exceptions, every Pixar film is a critical darling. But Pixar assumes substantial risk to generate those hits, the very risk that Relativity tried and failed to eliminate.
Pixar's model works because of its rigorous emphasis on story. Projects spend years in development, often returning to the drawing board to undergo extensive scrutiny (by human brains). Pixar films succeed because their average return is high enough to compensate for the considerable risk and variance involved. And that average return all comes back to story.
Last year, we reported on one of the first screenplays written by an algorithm. (It was delectably Lynchian, but it made no sense.) We also published an article about one of the first features "co-written" by an algorithm, which combed successful genre film scripts in search of the best combination of horror plot elements to draw in audiences. (The verdict: computers aren't advanced enough to write screenplays—only to predict some high-performing elements, which then have to be turned into a screenplay by a human.) It seems that the age of algorithms has permeated art and creativity, arguably the most human of endeavors. Will we reject it or embrace it?
Here's what AI can't do when "watching" a movie: It can't appreciate the words not spoken. It can't imagine the way things might've gone differently for a character, causing feelings of empathic regret and sadness. It can't imagine what might happen next and delight in the anticipation. It can't see its own experiences in a character's and suddenly feel less alone.
While subjective decisions can surely "lead to box office failure" in Hollywood, as Azermai said, subjectivity is what makes us human. If anything, we should be striving to make scripts more human, not less.
If ScriptBook wants to weed out the blockbuster flops and cherry-pick the money-makers, that's fine by me; after all, Hollywood is a business. So, sure, ScriptBook—go ahead and eliminate human error in the context of a money-making machine. But I have one plea: keep art and personal stories out of it.
An algorithm can predict whether two people have things in common, but it can't predict their chemistry. It's no different with movies.