A Pyrrhic Victory?: Reflecting on WGA’s 2023 MBA

Article by Maxi Grindley

Abstract:

This article reflects on the agreement signed by the Writers Guild of America (WGA) following the Hollywood writers’ strike, specifically its provisions concerning the use of AI. Though the sections overtly referencing AI aim to curtail its use, the implicit assumption of its usage elsewhere in the agreement seriously undermines these attempts.

Seen in the context of Gilles Deleuze’s notion of the ‘dividual’, the agreement thus helps perpetuate the current prevalence of digital surveillance as a method of control. Indeed, this has become so foundational that it can now be considered a core characteristic of our society, as Shoshana Zuboff’s conception of ‘surveillance capitalism’ suggests.

Though the agreement superficially suggests a clear victory for writers over AI, the truth is more complicated. There is no doubt it is a positive step, but its broader implications reveal a world in which AI is increasingly inevitable and unavoidable to an extent current legislation struggles to deal with.

Keywords: AI, surveillance, society, control, capitalism

Header Image: “WGA Strike 6.21.2023 004” by ufcw770 is licensed under CC BY 2.0.

Introduction

Oscar Wilde has become a mythical figure in literature. His extravagant wit and wicked charisma guaranteed a cult status few authors of his era – or any era – can match. While the popularity of his written work certainly owes something to his ornate eloquence, it is in far larger debt to his mercurial talent for inscribing universal thoughts and dreams such that the reader feels suddenly reminded of the community of humanity. Perhaps other writers have described the realities of everyday life better than he did, but very few have managed to represent the fantasies of the dreamworld like he did. So strident was Wilde’s advocacy for imagination, not rationality, as the intangible essence of humanity that he famously declared “life imitates art far more than art imitates life” (1891, p.56). It is a quote that perfectly summarises Wilde’s enduring appeal: replete with punning wordplay and oxymoronic cleverness, it is nonetheless grounded in surprising prescience.

Today this prescience is, however, receiving significant help from an unexpected source – AI, a technology reliant on decoding the ordered logic of humanity rather than its intangible emotion – proving Wilde’s aphorism truer than ever. On the one hand, there have been specific examples of developments in AI closely mimicking those predicted by television series, for example in the Black Mirror episode “Joan is Awful” (Brooker, 2023), in which Salma Hayek’s character fights to prevent a film company’s unrestricted usage of her digital likeness, an eerie parallel to one of the main motivations for the 2023 SAG strikes (Bailey & Honderich, 2023). On the other hand, the ingrained use of AI (and big data more generally) within the film and television industries has meant that these artistic products are increasingly also a mechanism of societal normalisation. The consumption of art is therefore becoming a means of engendering social conformity, ensuring life becomes more homogeneous and as such more like the art that is producing this effect in the first place.

The MBA and the Individual

In order to better examine this, it is useful to consider the Writers Guild of America’s 2023 Minimum Basic Agreement (MBA), the agreement concluded to end the strike. Following the deal’s conclusion, WGA West President Meredith Stiehm described it in an interview with Deadline Hollywood as “an amazing deal that has meaningful gains for every sector” (Patten, 2023). Whilst other gains were contextually implied (wage increases, writers’ room staffing assurances, and increased pension and healthcare contributions, all listed by the interviewer directly beforehand), it was AI that Stiehm specifically described as “an existential proposal – one that had to be addressed now” (Patten, 2023). Indeed, within the MBA (2023) not only is there a specific section devoted to establishing guidelines concerning the use of AI in scriptwriting, but its usage is also assumed and implied throughout much of the wider agreement. Whilst the former section certainly seems to offer hope in terms of the harnessing of AI for positive ends, the latter presumption paints a rockier and more doubtful picture.

The new article added to the MBA contract (2023) under the header “Generative Artificial Intelligence” indicates the seriousness with which the revolutionary impact of AI is being treated. Indeed, its content offers several positive and practical steps to protect screenwriters from the potentially disruptive effects of AI. In the Summary of the 2023 WGA MBA, under the header “Artificial Intelligence”, the WGA establishes four key measures regulating the use of AI: it cannot write or rewrite literary material or source work (so cannot replace a writer’s credit); its usage cannot be required (but can be voluntarily chosen); its prior usage must be disclosed to screenwriters; and its training cannot exploit writers’ materials. On a pragmatic level, this is obviously a very positive result for the WGA, whose members’ livelihoods are significantly protected from the possibility of being marginalised by AI. Instead, clear boundaries are set whereby AI can be used to the writers’ advantage, but under the clear provision that it is with their consent and to their benefit. Moreover, the emphasis on transparency in terms of AI usage is also a constructive step in helping demystify the entire process.

However, in the sections indirectly related to AI, the agreement’s terms become more problematic. For instance, another notable consequence of the new agreement is that writers are entitled to a bonus residual dependent on the streaming figures of shows made primarily for streaming release. Under the header ‘Viewer-based Streaming Bonus’, the contract describes “a new residual based on viewership … with views calculated as hours streamed domestically of the season or film divided by runtime”. Thus, following this agreement, the streaming platforms will provide the WGA with increased data regarding exact streaming figures. While this would seem a notable success, it is undermined by the fact that this data is “subject to a confidentiality agreement” and that individual members of the WGA will only receive it “in aggregated form”. While this section does not specifically mention AI or big data, its reliance on the calculation of streaming figures ensures that it is implicitly dependent on the use of such tools. As such, there is a stark contrast between the clear and restrictive guidelines set for AI usage in the screenwriting process and its complete elision in the viewership-calculation process. While the agreement is therefore seemingly effective in safeguarding the rights of screenwriters, in doing so it implicates itself in perpetuating the unregulated use of AI to monitor streaming platforms’ audiences on behalf of technology conglomerates.
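The views formula quoted from the ‘Viewer-based Streaming Bonus’ clause – hours streamed domestically divided by runtime – can be illustrated with a minimal sketch. The figures below are entirely hypothetical and are not drawn from the agreement; only the division itself reflects the quoted contract language.

```python
# Sketch of the MBA's stated views formula:
# views = hours streamed domestically / runtime.
# All numbers here are invented for illustration.

def views(hours_streamed_domestic: float, runtime_hours: float) -> float:
    """Views as defined in the 'Viewer-based Streaming Bonus' clause."""
    if runtime_hours <= 0:
        raise ValueError("runtime must be positive")
    return hours_streamed_domestic / runtime_hours

# A hypothetical 8-hour season streamed for 40 million domestic hours:
season_views = views(40_000_000, 8.0)
print(f"{season_views:,.0f} views")  # 5,000,000 views
```

Even this trivial calculation presupposes that the platform continuously logs per-household streaming hours – precisely the observation the article argues the agreement leaves unexamined.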

“WGA Strike 6.21.2023 068” by ufcw770 is licensed under CC BY 2.0.

Whilst this may seem trivial, it contributes to how the world today is organised in a radically different way than in the past. The proliferating use of big data and AI to analyse and model information about citizens has seismically altered what it means to be human. As Gilles Deleuze (1992, p.5) notably stated, “we no longer find ourselves dealing with the mass/individual pair. Individuals have become ‘dividuals’, and masses, samples, data, markets, or ‘banks’”. Indeed, this has become so prevalent it is no longer even remotely shocking: in a letter to the UK parliament, the preeminent streaming platform Netflix revealed how they divide viewers into the categories of “starters”, “watchers”, and “finishers” when distributing viewing data (Netflix – Supplementary Written Evidence (PSB0069), 2019). [1] Thus, humans are relegated from being individual customers to numerical occurrences of a particular data set. What is notable is not that Netflix is doing this, but that they regard it as sufficiently obvious and unremarkable that they are willing to share this information openly. In fact, their algorithmic data about users is much more advanced, enabling their recommendation system to be so successful that 80% of content streamed is found via this engine, with the remaining 20% being found through search (Gomez-Uribe & Hunt, 2015, p.5). There is no doubt therefore that Netflix’s private use of AI is reliant upon analysing and categorising viewers with a specificity and sophistication far greater than that necessary for the comparatively infantile groupings of “starters”, “watchers”, and “finishers”. Individual viewers have been decomposed into an almost incalculable number of data points and then reaggregated into an interpretable mass data pattern.
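The “dividual” logic described above – reducing a viewer to a bucket in a data set – can be sketched as a few lines of code. The completion thresholds below are invented for illustration; Netflix’s actual cut-offs for “starters”, “watchers”, and “finishers” are not public in this form.

```python
# Hypothetical sketch of bucketing viewing sessions by completion fraction,
# in the spirit of the "starters" / "watchers" / "finishers" categories.
# The 70% and 90% thresholds are assumptions, not Netflix's real cut-offs.

def categorise(watched_minutes: float, runtime_minutes: float) -> str:
    fraction = watched_minutes / runtime_minutes
    if fraction >= 0.9:
        return "finisher"
    if fraction >= 0.7:
        return "watcher"
    return "starter"

sessions = [(5, 60), (45, 60), (58, 60)]  # (minutes watched, runtime)
print([categorise(w, r) for w, r in sessions])
# ['starter', 'watcher', 'finisher']
```

The point of the sketch is how little of the underlying data survives the bucketing: a continuous record of behaviour is collapsed into a single label, which is all the “aggregated form” of the MBA’s data-sharing provision would ever expose.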

Whilst the logical next step would seemingly be to analyse how exactly this recommendation system – as well as Netflix’s broader AI strategy – works, the lack of public information regarding the precise mechanisms of Netflix’s AI usage renders this an impossibility. Vitally, this impossibility – one not exclusive to Netflix but rather shared by every big technology company – is not an accident. In 2004, Steve Mann coined the term “sousveillance” to describe a human-centric alternative to surveillance reliant on watching-from-below rather than watching-from-above. However, whereas Mann focuses on this tactic in relation to combatting CCTV’s surveillance, the ensuing years’ technological developments mean that it can no longer be considered a pragmatic or effective method of resisting digital observation. As Finn Brunton and Helen Nissenbaum perceptively note, as well as “an asymmetry of power … the second asymmetry, equally important, is epistemic” (Brunton & Nissenbaum, 2013, p.166). Not only are we therefore unable to stop or control how data about us is being used and exploited, but we are also often unaware even of when or how it is being gathered. Returning to Netflix, the simple act of watching an episode or film is rarely equated with the act of being observed and analysed, even though it is precisely this act that provides the data which will be used to construct the consequent predictions and recommendations. Even once we do consider this, it is hard to conceive a realistic method of obstructing it: the notion of “sousveillance” is impossible against a faceless, directionless digital monolith. The two-sided anonymity inherent in the digital realm has therefore created a situation whereby its users are continuously visible to invisible algorithms.

The MBA and the Society

Admittedly, this may all seem a hyperbolic response to Netflix’s attempts to ensure they recommend the optimal content for each user – after all, if Netflix gets to increase their screen-time figures and users get to watch content they enjoy, everybody wins, right? In order to better consider this, it is necessary to introduce Kevin Haggerty and Richard Ericson’s notion of “the surveillant assemblage” (2003). They see this assemblage as a “potentiality, one that resides at the intersections of various media that can be connected for diverse purposes” (Haggerty & Ericson, 2003, p. 609). Thus, whilst Netflix’s algorithm itself is arguably a beneficial development, the issue is rather that it exists as a single strand in a mutually-reliant, interlocking network of similar strands, all of which are premised on the continuing observation of their users. Admittedly, Haggerty and Ericson see this development as positive, arguing it constitutes “a rhizomatic levelling of the hierarchy of surveillance” (Haggerty & Ericson, 2003, p. 606). However, considering the power imbalance analysed in Netflix’s use of algorithms, it is clear that their hope for the occurrence of a “rhizomatic levelling” of surveillance has not been borne out. Instead, the user is the single entity that has been unmoored from this assemblage, leaving them unprecedentedly vulnerable to not only individual instances of surveillance but also their exponentially cumulative impact. While in a sense it is true that today even the most powerful people on the planet are exposed to levels of surveillance unprecedentedly similar to everyone else’s, this should not obscure the reality that they nonetheless profit from this surveillance to a grossly disproportionate extent.
To take a hypothetical example, Netflix’s CEO may be observed algorithmically when watching Netflix like any other user, but unlike any other user, the cumulative result of this surveillance will enrich him significantly by improving the company and thus making it more valuable and profitable. Thus, Netflix’s algorithm cannot be considered simply as a single entity but rather must be seen as a key element of the unfathomable algorithmic surveillance network it helps constitute.

The result of this constant observation is that the very notion of control has been reversed. It is no longer a matter of reactively obstructing certain choices or options, but of proactively encouraging others. Observing this, James Brusseau concludes “the strategic requirement for control today is no explicit prohibitions, no blocked possibilities, no forbidden ways. There are only opportunities and temptations. Some lead you towards, others lead you away” (Brusseau, 2020, p.11). Netflix, for instance, does not need to worry about censoring certain films or series when they can instead rely on their recommendation algorithm (responsible for 80% of content watched) to simply prioritise other, more suitable content. For the first time, then, control has become the primary actor and human action merely secondary. More than just a chronological reordering, this radically shifts what it entails to be an individual: as Andre Seecamp and Jan Söffner argue (2024), the “human has morphed into the software’s user” (p.81). Whilst it is easy to welcome the effectiveness of algorithmic recommendations, in doing so we also accept the loss of individuality they entail. As each of us simply chooses to watch what has been calculated to suit our interests, our viewing patterns become a self-fulfilling prophecy: these interests remain highly predictable because there are no outside stimuli to alter the situation. Beyond just a specific issue concerning our Netflix choices, our broader susceptibility to big data’s predictions means that we are losing the ability to choose freely without even realising it.

Naturally, if the nature of what it means to be human has changed so radically, the notion of the society that we together constitute has also undergone a seismic change. In particular, capitalism has become restructured around the central conceit of proactively influencing consumers’ choices and decisions. As Masa Galic and colleagues state, companies are increasingly “aiming to predict and modify human behaviour as a means to produce revenue and market control” (2017, p.24). Indeed, the term “surveillance capitalism” was explicated by Shoshana Zuboff as referring to a mode of capitalism reliant on ever-increasing data exploitation and resultant customisation (Zuboff, 2019). By silently presupposing the use of big data to track Netflix’s users, the WGA’s deal thus becomes indicative of the extent to which AI and big data have seeped into today’s culture and society. The fact that even a union, traditionally the defender of individual workers, accepts its necessity as a silent precondition reveals quite how foundational it is in today’s market. There is thus a seemingly inescapably symbiotic relationship between this new society and its inhabitants: the latter continuously constitute the former, while the former constantly shapes the latter. In other words, as we unknowingly sacrifice our autonomy, the society we exist in expects even less autonomy of us, pushing us to sacrifice yet more of it – and so the negative spiral continues.

Conclusion

Dystopian as it may seem, it is therefore vital to bear in mind the broader implications concerning big data use inherent in agreements such as the WGA’s new MBA. On a pragmatic level, it certainly seems to provide useful restrictions and guidelines on AI usage, which should help protect the livelihoods of the entire profession. This does not mean that, on an ideological level, its more insidious content should simply be ignored. Its presumption and implicit advocacy of the continued use of big data to track and analyse streaming platforms’ users further entrenches big data as a foundational aspect of modern life. This is not intended to disparage or criticise the agreement, but rather to assert the importance of continued awareness of the deep pervasiveness of digital surveillance today – and the fact that it is a pervasiveness we are all to some extent complicit in, however invisible it may seem.

Notes:

[1] Netflix shall henceforth be used as an example of algorithmic surveillance in the contemporary world, but only because they operate in an industry adjacent to the WGA – not because other companies, such as Google and Facebook, are using algorithms any less than them.

References:

Bailey, C., & Honderich H. (2023, July 15). SAG strike: Why Hollywood actors have walked off set. BBC. https://www.bbc.com/news/world-us-canada-66208226.

Brooker, C. (Writer) & Pankiw, A. (Director). (2023, June 15). Joan Is Awful (Season 6, Episode 1) [TV series episode]. In C. Brooker (Executive Producer), Black Mirror. Netflix.

Brunton, F., & Nissenbaum, H. (2013). Political and Ethical Perspectives on Data Obfuscation. In M. Hildebrandt & K. De Vries (Eds.), Privacy, Due Process and the Computational Turn (pp. 164–188). Routledge.

Brusseau, J. (2020). Deleuze’s Postscript on the Societies of Control: Updated for Big Data and Predictive Analysis. Theoria: A Journal of Social and Political Theory, 67(164), 1-25.

Deleuze, G. (1992). Postscript on the Societies of Control. October, 59, 3–7.

Galic, M., Timan, T. and Koops, BJ. (2017). Bentham, Deleuze and Beyond: An Overview of Surveillance Theories from the Panopticon to Participation. Philosophy and Technology, 30, 9-37. 

Gomez-Uribe, C. A., & Hunt, N. (2015). The Netflix recommender system: Algorithms, business value, and innovation. ACM Trans. Manage. Inf. Syst. 6 (4), Article 13, 1-19. https://doi.org/10.1145/2843948. 

Haggerty, K., & Ericson, R. (2003). The Surveillant Assemblage. British Journal of Sociology, 51(4), 605–622. 

Mann, S. (2004). “Sousveillance”: Inverse Surveillance in Multimedia Imaging. Multimedia ’04: Proceedings of the 12th Annual ACM International Conference on Multimedia. https://dl.acm.org/doi/10.1145/1027527.1027673 

Patten, D. (2023, September 26). WGA Chiefs Ellen Stutzman & Meredith Stiehm Q&A: “Transformative” Deal For Hollywood, Solidarity With SAG-AFTRA & The AMPTP’s “Failed Process”. Deadline Hollywood. https://deadline.com/2023/09/writers-guild-leaders-interview-end-of-strike-1235557011/

Seecamp, A., & Söffner, J. (2024). A Postscript to the “Postscript on the Societies of Control”. SubStance, 53(2), 75–85.

UK Parliament. (2019). Netflix – supplementary written evidence (PSB0069). The Future of Public Service Broadcasting Inquiry.

Wilde, O. (1891). Intentions. Methuen & Co.

Writers Guild of America. (2023, September 25). Memorandum of Agreement for the 2023 WGA Theatrical and Television Basic Agreement. https://www.wga.org/uploadedfiles/contracts/2023_mba_moa.pdf 

Writers Guild of America. (2023, September 25). Summary of the 2023 WGA MBA. https://www.wga.org/contracts/contracts/mba/summary-of-the-2023-wga-mba

Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. Profile Books.  