AI and the Auteur: Implications of Using Artificial Intelligence in Film Studio Decision-Making – JURIST
JURIST Guest Columnist Kelsey Farish, a technology and intellectual property lawyer at DAC Beachcroft, discusses the legal implications of AI’s use in the greenlighting process of filmmaking…
The global movie industry generated over $43 billion in revenue in 2018, of which the United States alone contributed more than $11 billion. Yet these seemingly impressive headline figures obscure the fact that year-on-year growth has been a sluggish 2 per cent over the last several years, with market researchers forecasting further stagnation. Given the inherent financial risk involved in filmmaking, some now believe artificial intelligence, rather than human expertise, is best placed to select which films are most likely to provide suitable returns on investment.
In early January 2020, Warner Bros signed a deal with Cinelytic, a Los Angeles-based artificial intelligence company which, according to the press release, aims to help content creators make faster, better-informed decisions through predictive analytics. Belgium’s ScriptBook provides a similar service, touted as “artificially intelligent script analysis and box office forecasting”. Warner Bros is not the first film studio to pair up with an AI platform of this type, although it is one of the first to disclose its collaboration publicly.
From script to screen: the greenlighting process and how AI may help
Greenlighting is the process by which a studio formally approves a film’s production. The greenlighting decision must take into account not only the initial financing and advertising costs, but also the potential for merchandising, licensing, spin-offs, and even theme parks. This process often involves committees composed of several top executives, not least because of the eye-watering budgets allocated to some endeavors. By way of illustration, the three most recent Avengers films collectively cost over $1 billion to produce. With producers facing increased competition to choose revenue-generating, award-winning projects, Hollywood is now turning to artificial intelligence to provide valuable insight into which films are likely to be blockbusters – or flops.
In practice, services like Cinelytic and ScriptBook work by mining and analyzing keywords attributable to a film’s themes, dialogue, and actors, with some also offering to analyze the scripts themselves. These data are then fed through machine learning processes to identify trends and patterns, and cross-referenced with box office performance, critical reception, and industry awards statistics. When taken together with information such as a celebrity’s popularity in a given region, AI can help streamline the production pipeline and assist studios with choosing anything from actors and plot elements to marketing strategies. Despite the obvious benefits, however, there are certain potential drawbacks which cannot be overlooked. From the author’s perspective, three key risks seem particularly apparent, namely: the inability to account for outliers; impediments to diversity and inclusion; and the denigration of human creativity.
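In very rough terms, the cross-referencing described above can be pictured as a similarity search over past releases. The sketch below is purely illustrative – the keyword features, box office figures, and nearest-neighbour approach are the author’s assumptions for the purposes of explanation, not the actual methods used by Cinelytic or ScriptBook:

```python
# Hypothetical sketch: forecasting a script's gross from "similar" past films.
# All film features and dollar figures below are invented for illustration.

def jaccard(a, b):
    """Overlap between two keyword sets, from 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def predict_gross(candidate_keywords, past_films, k=2):
    """Average the grosses of the k past films most similar to the candidate."""
    ranked = sorted(past_films,
                    key=lambda f: jaccard(candidate_keywords, f["keywords"]),
                    reverse=True)
    nearest = ranked[:k]
    return sum(f["gross_usd_m"] for f in nearest) / len(nearest)

past_films = [
    {"keywords": {"superhero", "ensemble", "sequel"}, "gross_usd_m": 2000},
    {"keywords": {"superhero", "origin-story"},       "gross_usd_m": 850},
    {"keywords": {"drama", "prison", "friendship"},   "gross_usd_m": 60},
]

# A new script tagged with these keywords is matched against the back catalogue.
script_keywords = {"superhero", "sequel", "heist"}
estimate = predict_gross(script_keywords, past_films)
```

Real platforms will of course use far richer features and models, but the basic logic – score a new project by its resemblance to past performers – is the same, and it is precisely this backward-looking logic that gives rise to the risks discussed below.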
Risk One: Imperfect Predictions and Liability
Firstly, learning from the past does not mean that you can accurately predict the future. Machine learning algorithms rely on sample “training” data in order to build mathematical models, and as with any form of statistical modelling, there are always outliers which cannot be accurately forecast. It is arguable that many classic films would have been flagged as likely ‘failures’ by the likes of Cinelytic and ScriptBook, and therefore refused the proverbial green light in their day. Conversely, it is not unusual for a big-ticket production to flop despite using the best talent in the industry. Take the 1982 film Blade Runner as an example. Despite starring leading man Harrison Ford – who by then had already achieved fame thanks to his roles in Star Wars and Indiana Jones – the film received only mixed reviews and grossed a paltry $33 million against its $28 million budget. The Shawshank Redemption and Fight Club are two more recent examples of films that suffered poor box office performance, but have since endeared themselves to audiences the world over.
From a legal perspective, alarmism over AI-assisted forecasting is somewhat passé. Although these platforms have the potential to disrupt the industry using a relatively new technology, it is unlikely that their contractual terms of supply will differ greatly from more traditional software-as-a-service (SaaS) agreements. In both cases, suppliers of technology solutions – whether machine learning or otherwise – will want to exclude as much liability as possible for any erroneous or incomplete predictions. Likewise, film studios seeking to benefit from the algorithmic output will want to ensure adequate service levels and ultimately, sufficient value for money. In any event, the studios will need to take a commercial view as to how much weight to ascribe any AI-generated predictions and analysis provided, and at what cost.
Risk Two: Bias and the Rights of the Data Subject
A second key concern of using AI in the film industry is that of bias and perpetuated stereotypes. Predictive analytics could be run against a variety of factors concerning potential actors, screenwriters, and directors: such characteristics could include one’s gender, age, race, ethnicity, disability or impairment, sexual orientation, and so on. As we have seen from studies on machine learning in the criminal justice context, algorithms can perpetuate human biases. It is foreseeable that the AI could become path dependent, err on the side of caution, and fail to account for cultural shifts in audience attitudes. For instance, of the 100 top-grossing movies made between 2017 and 2018, only 33 per cent of all speaking or named characters were girls or women. If this metric were analysed in isolation, it is conceivable that a machine learning algorithm would lean towards viewing male protagonists as a safer choice for higher profits. As Norwegian filmmaker Tonje Hessen Schei told Screen Daily, a concern with this new process is that it may become “harder and harder to get a diverse span of voices that will be heard in the market.”
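The mechanism by which a skewed data set produces a skewed “safest choice” can be shown with a toy calculation. The training set below is invented by the author to mirror the under-representation described above; it is not real industry data:

```python
# Illustrative only: how an unbalanced training set can encode bias.
# Figures are invented; female-led films are deliberately under-sampled,
# echoing the under-representation discussed in the text.

training_set = [
    {"lead": "male",   "gross_usd_m": 300},
    {"lead": "male",   "gross_usd_m": 250},
    {"lead": "male",   "gross_usd_m": 400},
    {"lead": "male",   "gross_usd_m": 150},
    {"lead": "female", "gross_usd_m": 350},  # so few samples that the
    {"lead": "female", "gross_usd_m": 90},   # group average is unreliable
]

def average_gross_by_lead(films):
    """Mean gross per lead category -- the kind of naive metric a model might learn."""
    totals, counts = {}, {}
    for film in films:
        totals[film["lead"]] = totals.get(film["lead"], 0) + film["gross_usd_m"]
        counts[film["lead"]] = counts.get(film["lead"], 0) + 1
    return {lead: totals[lead] / counts[lead] for lead in totals}

averages = average_gross_by_lead(training_set)

# A "pick the historically higher average" rule reproduces the skew of the
# sample itself, saying nothing about actual audience appetite.
safest = max(averages, key=averages.get)
```

The point is not that any real platform works this crudely, but that an average learned from an unrepresentative past becomes self-reinforcing once it drives future greenlighting decisions.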
The legal implications of, and responses to, this point remain somewhat unclear. To date, the primary concerns of United States lawmakers with respect to AI decision-making have been in the areas of autonomous vehicles and weaponised aerial drones. While the Federal Trade Commission has issued recommendations that promote principles of lawfulness and fairness, no specific legislation exists to protect individuals against AI-enabled discrimination. It does, however, stand to reason that in the employment context, automated decisions which adversely impact members of protected classes may violate extant anti-discrimination laws.
By contrast, under European Union data privacy law (namely the General Data Protection Regulation) a European resident has the right to ‘meaningful information about the logic involved’ in automated decisions. In other words, European legislation recognises that individuals may be entitled to an explanation concerning the output of an algorithm which directly impacts them. However, this right bites only in limited circumstances, namely where decisions are made solely by automated means without any human intervention. At present, it is difficult to imagine a scenario where a casting decision (for example) would be made wholly independently of a human executive’s final consideration and approval.
Risk Three: Creativity and Copyright in the age of AI Authorship
The movies occupy a very particular space in the human imagination. They can be used to elevate storytelling to incredible heights of fantasy, as well as to cast a spotlight on those narratives and factual events which would otherwise be overlooked. For that reason, the third and perhaps most philosophical issue arises from our collective understanding that cinema is an inherently human art form. Although the development of theoretical frameworks and financial models to predict box office performance is nothing new, to what extent are we amenable to involving computers in the process? On the one hand, audiences and critics alike applaud computer-generated special effects, and we are becoming increasingly comfortable with the artistic use of digital de-ageing and deepfaking of celebrities. But we seem to struggle with the idea of technology transitioning from artist’s tool to autonomous authority in the creative process.
ScriptBook’s website proclaims that its platform is progressing towards “co-creation between man and machine [in] the era of AI as a co-writer”. While many will be aware of the often nonsensical (and indeed humorous) outputs of some AI chatbots, in 2016 a film director and an AI researcher trained a machine-learning system on classic science fiction screenplays, including Avatar, Minority Report, and 2001: A Space Odyssey. The result was a nine-minute short film entitled Sunspring, which went on to win an award at the Sci-Fi London Film Festival. Needless to say, as the sophistication of machine learning continues to improve, deals such as the one between Warner Bros and Cinelytic give rise to fascinating philosophical questions of originality, creativity, and even ontology.
Legal academics and practitioners alike have been considering the legal implications of computer-generated works for decades, and works created using AI could have very important consequences for copyright law. Importantly, in many jurisdictions, including the United States and the United Kingdom, current copyright law prevents authorship from vesting in non-human creators. By extrapolation, such works could theoretically be deemed free of copyright protection and – problematically for film studios – fall into the public domain. Some scholars and policymakers suggest reimagining the work-for-hire doctrine to alleviate this problem, which would grant copyright in the first instance to the person(s) operating the AI.
Of course, the use of AI in the film industry is only one example of how human behavior and cultural trends can be studied through the quantitative machine analysis of digitized text. And clearly, there are many ethical and related legal risks that must be considered by a business using AI, regardless of sector. In this increasingly dynamic market, the full implications of artificial intelligence in the film industry are likely to remain opaque for the foreseeable future. But if AI used behind the scenes can help film industry stakeholders create more binge-worthy series and award-winning feature films, audiences are likely to be happier, even if the “movie magic” is more algorithm, less auteur.
Kelsey Farish is a technology and intellectual property lawyer at DAC Beachcroft, based in their London office. She has a particular interest in artificial intelligence and the media industry, and her work on deepfakes has recently been published in the Oxford Journal of Intellectual Property Law & Practice.
Suggested citation: Kelsey Farish, AI and the Auteur: Implications of using artificial intelligence in film studio decision-making, JURIST – Academic Commentary, January 25, 2020, https://ift.tt/2GuJcgl
Opinions expressed in JURIST Commentary are the sole responsibility of the author and do not necessarily reflect the views of JURIST’s editors, staff, donors or the University of Pittsburgh.