Winnipeg School of Communication

Generation AI Coda

The Other Environmental Crisis

Jennifer Reid   /   August 3, 2022   /   Volume 3 (2022)   /   Features

CODA: A Political Question of the First Order

This three-part exploration of contemporary AI began with the assertion that AI is the most pressing environmental issue that we face, after the climate crisis. Far from suggesting that the project of AI be stopped or scrapped, the discussion has brought together research from diverse fields, highlighting the sociocultural, neurobiological, and relational problems unleashed by our pursuit of AI in its many forms. Its synthesizing narrative exposes the schizoidal vacillation between naive exuberance and greedy cynicism guiding AI development and deployment. In gesturing towards the growing lack of clarity as to what problems AI is proposed to solve, it also suggests that AI itself has become a problem that more AI is demonstrably failing to solve. AI seems to represent, rather, a human fight against the human experience of life itself, as our increasingly prescribed and monolithic built environment goes head-to-head with the diversity of natural environments that are the home of our equally diverse humanity. AI is at the forefront of a race for resource concentration in the hands of a few, who have seen the way forward in a system of homogenized integration of biological and artificial neural networks, in order to secure a next-level extraction and control of Earth-energy in all its forms. The author stands with Indigenous and other voices, including many AI researchers, who see the dangers lurking in the analogy between the dynamics of the current AI-powered technostructure and the commercially-driven colonialisms of the past, who see the AI threats to human personhood, communities, and environments, and who call for a new approach and a new set of protocols for the responsible and productive development and deployment of AI.1

The false philosophy that compels us to operate in the strange tenses of the historical future of AI is incompatible with the wisdom of diverse ways of knowing.

As Aleksandr Solzhenitsyn reminds us, “historic events always swoop down unexpectedly.”2 In other words, there is no human-directed mechanism that can absolve us of responsibility as we flow through space and time: despite appearances, there is no unalterable trajectory or teleology to which we are beholden when it comes to the direction of our lives, either individually or in community. If we are not alert to our environments and the environments we create, however, we will be swallowed up suddenly and determinedly, as if by the imaginary maw of history. Put another way, “there is absolutely no inevitability as long as there is a willingness to contemplate what is happening”3—or, as Hannah Arendt suggests, “to think what we are doing.”4 Hindsight is not good enough; neither is mere contemplation without some form of informed proaction in place of historical reaction. As this discussion has endeavoured to demonstrate, there is both a backlog of evidence and an adequacy of emergent indicators to show that we have taken a wrong turn in our relationship with AI. The false philosophy that compels us to operate in the strange tenses of the historical future of AI—as an inalienable process—is incompatible with the true spirit of inquiry and the wisdom of diverse ways of knowing, and is antithetical to free personal, political, and social will. Case in point: chasing after “free speech,” “freedom of expression,” and the expulsion of “fake news” in the context of a fundamentally unfree, behaviourally controlled, technological nexus that conditions humans away from their higher humanity towards their lower-level functionality, enshrined in machinic form, is patently absurd. This phenomenon is the error of true “technological determinism”: the collective delusion produced by ideological attachments to irrelevant concepts of “agency” in the face of technologically-driven transformations.

Offloading human ethical and social responsibilities onto an agentless army of self-directed algorithms does not improve human physical or mental health, solve poverty, ameliorate living conditions, create equality, preserve diversity, or save “the environment.” Machine states beget machine states: what is broken in us—already and anew—perpetuates exponentially and unceasingly under the conditions of the seemingly boundless energy of our AI companions, from medical devices like PARO to social media platforms like Twitter, Facebook, and TikTok. This is true, at least, until the fossil fuels run out and we can no longer power up our devices, or the biosphere expires while we distractedly hand out ha-has, hearts, and crying-face emojis to unseen interlocutors behind the illusory safety of the screen-glass. We must take to heart the admonitory aphorism from George Eliot’s 1876 Bayesian fiction, Daniel Deronda, that “a great deal of what passes for likelihood in the world is simply the reflex of a wish.”5 It is up to all of us to define and devise a new set of probabilities for ourselves.
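
Eliot’s aphorism has a precise modern echo. Read loosely in Bayesian terms (a gloss of mine, not Eliot’s or the essay’s sources’), a belief ought to be revised by evidence:

\[
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)}
\]

The “reflex of a wish” names what happens when the prior \(P(H)\), what we want to be true, quietly does the work that the likelihood \(P(E \mid H)\) ought to do; to “define and devise a new set of probabilities” is to let evidence of AI’s effects, rather than desire for its promises, set the terms of the update.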

Chasing after “free speech,” “freedom of expression,” and the expulsion of “fake news” in the context of a fundamentally unfree, behaviourally controlled, technological nexus is patently absurd.

The oversimplification that proposes scientific and technological change as a natural law tied to biological and evolutionary principles obscures its embeddedness in highly complex, socially-driven phenomena. As a corollary, the value, use, and acceptance of such innovations are construed as apolitical. For example, Google’s 2021 “Slogans for the Class of 2030” project with Canadian author Douglas Coupland announces a scientific and technological future that is already here, existing quite apart from any ethical considerations or necessities. This co-option of the historical future by Google, in its quest to naturalize AI and reconcile human being to its commercial vision of AI deployment, plugs into a narrative pathology of spatiotemporal management that posits a separation of humans from the ethical dimension of life, and the environments that support it. The control of this narrative is key for creating and maintaining dominance.

The ideology of AI proposes human being as a kind of abstract performativity within an inevitable ecology of machines and their properties.

Championed by digital industrialists and many researchers alike as a support for creativity, self-expression, democracy, and empowerment, the ideology of AI proposes human being as a kind of abstract performativity within an inevitable ecology of machines and their properties. In this context, the question of who is setting the AI agenda and the examination of the demonstrable sociocultural, neurobiological, and relational effects of AI are equally urgent and oddly subterranean. For example, while AI-powered social media platforms stand as the quintessential emblem of this ideological confluence, they have not ushered in an era of actualized democracy, personal freedom, and self-agency. Instead, as Ronald J. Deibert at the University of Toronto’s Citizen Lab observes, they have promoted a disenfranchised neo-serfdom. In his 2013 book, Black Code: Surveillance, Privacy, and the Dark Side of the Internet, Deibert remarks that

it is important to remind ourselves of the political economy of social media: if social media seems like ‘imagined communities’ … the members are more like serfs than citizens, the users both consumers and product. Social media might thus best be described as epiphenomenal public spheres: while we may increasingly use these platforms for political purposes, politics is only a by-product of their intended purpose, and one that is highly constrained by terms of service that are outside the direct control of users.6

The renunciations involved in being accepted into such digital environments as a participant remain a serious political issue. They have only expanded in their scope and scale in the intervening years, and include voluntary abandonment—not only by means of legal contract, but by means of operational architecture—of the many rights and freedoms that democratic societies purport to uphold.7 This political economy has exacted a high price that extends beyond asymmetries of power and subversion of agency to human fundaments arising from nature. As Jean-Jacques Rousseau warned in his 1762 publication, The Social Contract, “to renounce one’s liberty is to renounce one’s essence as a human being, the rights and also the duties of humanity. For the person who renounces everything there is no possible compensation. Such a renunciation is incompatible with human nature, for to take away all freedom from one’s will is to take away all morality for one’s actions.”8 The rule and management of people in these digital environments by fairly crude AI has been made possible through the willingness of individuals to give up a range of freedoms that are not simply abstract concepts of polities but rooted in physical embodiment. The bodily phenomena of language, thought, and action, their complex interrelationship at the biological level, their genetic and epigenetic interaction within the body and between bodies, their role in memory, identity, and cognition itself, are hazarded in this transaction.

As we move more profoundly into the space of AI as superior to humans, we pay an increasingly high tribute for passage into its territory.

The brain-computer analogy, which extended itself into the metaphor of “artificial intelligence,” coined in 1955,9 provides the impetus for amazing and exciting explorations into the subliminal operations and functions of biological brains. Similarly, our experience of life on Earth, which gave rise to a host of imaginative analogical extensions, spurs on scientific space exploration and the pursuit of human-habitable worlds and “alien” sentient beings. But analogy is such a powerful cognitive tool that it can also give rise to a kind of forgetfulness, such that we begin to believe that the conceptual blends we produce in and through it are realities, mistaking the outcome of an interpretative process of learning and understanding for “the real thing.” By mishandling our relationship to the tool of analogy, and to the analogy of “artificial intelligence” itself, we enter into a mistaken logical equivalence that stands as reality and compels us to use our deepening knowledge of biological brains as a proof for the superiority of artificial brains. This increasing blindness to the metonymic relationship between brain and computer opens the door to AI as an ideology and an inexorable fate for humanity.

That computational properties such as AI, machine learning, and algorithms exist, and that they have grown to do massive calculations beyond our abilities, does not mean that these properties—rising as a whole in the acronym “AI”—are de facto superior. Deep down we know this, but that knowledge, that insight, that perspectival distance, is rapidly slipping away. It is worth remembering that AI was initially a reverse-engineering project in relation to “the brain.” As we move more profoundly into the space of AI as superior to humans, we pay an increasingly high tribute for passage into its territory. We are rewiring our brains, removing access to the pathways and functions that are, in fact, the engine of evolution. We do not evolve “because the hammer.” We evolve because we have a particular earth-bound embodiment that perceives the type of the hammer already existing in the environment and makes it part of our human lifeworld, over which we have a creative and ethical responsibility. Far from delivering a more evolved future for human being, our current entanglement with AI promises a devolution for all humanity—socioculturally, neurobiologically, and relationally—if we do not act now.

In and through the crisis of AI we locate an ironic return to “natural philosophy” as an essential mode of inquiry for human being individually and in community. By the mid-twentieth century, philosopher Hannah Arendt, in The Human Condition (1958), found it necessary to remind her readers that “the earth is the very quintessence of the human condition,” and that “earthly nature, for all we know, may be unique in the universe in providing human beings with a habitat in which they can move and breathe without effort and without artifice.” For her, the 1957 launch of Sputnik—the “surprise” Cold War moment that sent the USA into spastic convulsions of STEM activity, resulting in NASA and the original moonwalk10—was a concrete example of the paradoxical human desire to escape the earth and, thereby, the human condition. She notes with alarm the political and scientific energy of the day poured into “making life ‘artificial’,” propelled by a “desire to escape from imprisonment to the earth” that was not new, but which had already made itself felt in popular culture. As she tells us, scientists of the 1950s believed that the cosmic and immortal “future man” would emerge within a hundred years; whatever form it took, Arendt believed this “future man” would emerge pre-programmed for “rebellion against human existence,” based as it was on a longstanding desire to exchange the life of earth for an artificial existence made by humans for humans.

This concern, about the balance between the artificial and the natural in relation to the human condition, is just as critical today in our AI-powered present as it was when Arendt raised it almost seventy years ago, in the wake of the atomic bomb and Sputnik. We leave unsolved this major crux because, through our actions, we defer answering what Arendt considered the only necessary query: “whether we wish to use our new scientific and technical knowledge in this direction.” We can no longer wait. Rather, we must take up the challenge this question poses to our ways of knowing about and experiencing earthly human existence. There is yet a further requirement that Arendt identified: we must approach the challenge mindful that “this question cannot be decided by scientific means; it is a political question of the first order and therefore can hardly be left to the decision of professional scientists or professional politicians.”11 Neither can we afford to abdicate responsibility in favour of either industry or complacency—our default positions up to and including the year 2022.

We are at a turning point in relation to AI as a parallel environmental crisis to the global climate crisis. The crisis cannot be solved by sole recourse to technical solutions or by technicians alone, but by ordinary people taking a stand and getting involved in asking the tough questions. From meta-surveillance via mobile devices, the Internet, social media platforms, satellites, and drones, to the race to occupy space, AI research, development, and deployment are implicated in every aspect of human life. As the push for deeper penetration of AI-powered integrated platforms and devices into the totality of an 8-billion-plus “market” accelerates, the earth-bound relationship between humans and their environments becomes simultaneously more tethered and more precarious than ever before.

On the one hand, the confluence of the twin environmental crises and their technologies has been a boon to “geospatial thinking”—industry spin suggests that “a heightened geospatial awareness of our place in the world is driving us to rebuild and strengthen our relationship with our planet and with each other” for “sustainability” and “profitability.”12 On the other hand, this awareness is shaped and supported by the material realities of the physical substrate on which it depends. The rush for resources and cheap production of electricity to support the digital infrastructure perpetuates disparity between places and people, and provides the impetus for environmentally destructive behaviour. For example, a 2019 UN digital economy report observes that “due to the large electricity requirements to cool the [digital] data centres, locations with cold climates, and abundant and reliable power supplies are the most attractive,” putting many global regions and developing countries at a disadvantage.13 The experience of extreme tethering in the midst of Earth-escape comes in the form of the “Internet of Things” (IoT): it is estimated that “by 2025, an average connected person in the world will interact with IoT devices nearly 4,900 times per day, or the equivalent of one interaction every 18 seconds.”14 At the same time, we continue to wrap the planet in wires: in 2021, operational fibre-optic undersea cables numbered around 436, at a combined length of 1.3 million kilometres.15 Looking above to the heavens, the UN Office for Outer Space Affairs’ database records nearly 13 000 objects launched into space as of May 2022,16 and while there are conflicting reports on the number of operational artificial satellites,17 ESRI’s Satellite Map designates 13 033 of the 19 109 satellites in its records as “junk.”18 Added to “sky pollution” from “artificial light and satellite networks,”19 this space-junk floats alongside any past, present, or future environmental gains made in off-Earth geophysical monitoring, disaster management, and emergency response.
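
As a quick check on these figures (the arithmetic is mine, not the reports’): a day contains 86 400 seconds, so 4 900 daily IoT interactions do indeed amount to roughly one every 18 seconds, and ESRI’s counts put the “junk” share of tracked satellites at about two-thirds:

\[
\frac{86\,400\ \text{s/day}}{4\,900\ \text{interactions/day}} \approx 17.6\ \text{s per interaction},
\qquad
\frac{13\,033}{19\,109} \approx 68\%.
\]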

From the anthropocentric perspective, no greater symbol of this fraught web of technological enhancements and reversals in relation to earth-bound physical embodiment and artificial intelligence can be found than in the reality of lethal autonomous weapons, or “slaughterbots.” Contained neither in “the far-future” nor in science fiction, “weapons which can autonomously select, target, and kill humans are already here”; “the era in which algorithms decide who lives and who dies is upon us,” declares Lethal Laws, a human-rights organization fighting against autonomous weapons.20 While the UN’s Group of Governmental Experts first met to tackle the question of slaughterbots in 2018, as of December 2021 the UN itself had come to no agreement on the matter; meanwhile, states around the world continue to “pour billions into autonomous weapons research.”21 As AI researcher Stuart Russell exhorts at the close of the short film Slaughterbots, “the window to act is closing fast.”22

It is abundantly clear that instruction in “media literacy,” age-based “screen-time” prescriptions, calls for “moderation” in digital media use, idealistic arguments about equitability and diversity, and endless reports on the state of the art by academic researchers, policy analysts, human rights organizations, and committees of the UN are inadequate in the face of current AI development and deployment. Meaningful action is what is required. Some of this action will seem expensive in the short term.

It is up to all of us to define and devise a new set of probabilities for ourselves.

What Is To Be Done?

At the centre of this call to action is another question that must be continually addressed by any well-formulated and well-executed practice: why does AI+automation always require more “you” in exchange for less control, less freedom, less agency, and less rights protection? Say no to forms of AI that replace essential human functions in healthcare and education: there can be no genuine replacement for human caregiving and the personal communication of wisdom. Create the means to support all modes of human caregiving and teaching for the sake of the intrinsic good that these functions have had throughout human history, and which have shaped our evolution. Recognize that children gain access to story and empowerment through natural language, as well as to the deep linguistic, social-emotional, and environmental awareness that comes with it. Say no to the hyperintegration of biological and artificial neural networks. Smash the orthodoxy of the brain-computer analogy, remembering which one is the most plastic and most full of value and possibility.

Be aware—not in awe—of new research and developments in technology: be active in making inquiries at the source. Reject the trend towards deeper device integration and more invasive, technologically-defined measures of personal identity and authenticity, including biometrics and certain types of multifactor authentication. Place limits on surveillance and related spinoffs. Create and sponsor local groups that involve all members of the community in technological decision-making; give them freedom and tools outside the straitjacket of institutionalized STEM education. Look for the criminal and civic codes that need revision and updating for the protection of individuals and communities. Challenge legislators to seriously regulate digital industries. Demand reciprocity. Repudiate the idea that the digital environment is the normative environment for humanity. Get involved with the Algorithmic Justice League.23 Join the fight against lethal autonomous weapons.24 Turn to Indigenous knowledge-keepers, thinkers, and artists—who see other ways forward for AI—as a first step in AI development and deployment.25 Collaborate with the Winnipeg School of Communication.26 Contribute to our journal, Winnsox.27

The time is now to join the environmental movement, and to put a stop to our constant trend of adoption and adaptation on the losing end of media participation. Let’s take our ethical and creative responsibilities to heart and out into the wild.

What we do with—and for—AI is, after all, a political question of the first order.

Citation:

Reid, Jennifer. “Generation AI Coda: The Other Environmental Crisis.” Winnsox, vol. 3 (2022).

ISSN 2563-2221

Notes

  1. For more, see Jason Edward Lewis, ed., Indigenous Protocol and Artificial Intelligence Position Paper (Honolulu: Initiative for Indigenous Futures and the Canadian Institute for Advanced Research (CIFAR), 2020), doi: 10.11573/spectrum.library.concordia.ca.00986506. ↩︎
  2. Aleksandr Solzhenitsyn, The Gulag Archipelago 1918–1956: An Experiment in Literary Investigation [volume 2], trans. Thomas P. Whitney (New York: HarperCollins Publishers, 2007 [1974]), p. 330. ↩︎
  3. Marshall McLuhan and Quentin Fiore, The Medium is the Massage: An Inventory of Effects, produced by Jerome Agel (Berkeley: Gingko Press, Inc., 1996 [1967]), p. 25. ↩︎
  4. Hannah Arendt, The Human Condition [2nd edition] (Chicago: University of Chicago Press, 2018 [1958]), p. 5. ↩︎
  5. George Eliot, Daniel Deronda, ed. Graham Handley (Oxford: Oxford University Press, 1998 [1876]), p. 82. ↩︎
  6. Ronald J. Deibert, Black Code: Surveillance, Privacy, and the Dark Side of the Internet (Toronto: Penguin Random House, 2013) p. 107. ↩︎
  7. A hopeful perspective on recent legislative activity in Europe and in the U.S.A. suggests that a modicum of governmental self-awareness on this theme has been reached. In early 2022, the EU achieved what it characterizes as “political agreement” on two key legislative measures, collectively known as the Digital Services Act (DSA) package, subject to formal approval. This package consists of the Digital Markets Act (agreement reached on 23 March 2022), and the Digital Services Act (agreement reached on 23 April 2022) (see European Commission, “Digital Markets Act: Commission Welcomes Political Agreement on Rules to Ensure Fair and Open Digital Markets,” https://ec.europa.eu/commission/presscorner/detail/en/ip_22_1978, and idem, “Digital Services Act: Commission Welcomes Political Agreement on Ensuring a Safe and Accountable Online Environment,” https://ec.europa.eu/commission/presscorner/detail/en/ip_22_2545 (accessed 18 June 2022)). The DSA package is intended “to create a safer digital space where the fundamental rights of users are protected and to establish a level playing field for businesses.” (European Commission, “The Digital Services Act Package,” https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package (accessed 18 June 2022)). In 2021, the US vowed to tackle the issue of “Big Tech” through regulatory legislation. For accessible overviews, see The Associated Press, “EU Deal Targets Big Tech over Hate Speech, Disinformation,” CBC News, 23 April 2022, https://www.cbc.ca/news/world/hate-speech-big-tech-european-union-1.6428784, and Kris Reyes, “Year of Reckoning for Big Tech: How U.S. Lawmakers Plan to Rein in Companies Like Facebook and Google in 2022,” CBC News, 31 December 2021, https://www.cbc.ca/news/business/big-tech-regulation-united-states-social-media-1.6295055 (accessed 23 April 2022). ↩︎
  8. Jean-Jacques Rousseau, The Social Contract, trans. Susan Dunn, pp. 151–254, in Jean-Jacques Rousseau: The Social Contract and the First and Second Discourses, ed. Gita May and Susan Dunn (New Haven: Yale University Press, 2002), book I, chapter IV, p. 159. ↩︎
  9. While John McCarthy is often credited with coinage of the term “artificial intelligence” in 1956, McCarthy was just one of four well-known individuals in the computing community who co-authored the August 1955 proposal, leading to the famed 1956 Dartmouth Summer Research Project on Artificial Intelligence, in which the phrase first appeared. See the initial proposal co-authored by John McCarthy, Marvin L. Minsky, Nathaniel Rochester, and Claude E. Shannon, “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence: August 31, 1955,” summarized in AI Magazine, vol. 27, no. 4 (Winter 2006): pp. 12–14. ↩︎
  10. Vaclav Smil, “Russia and the USA: How Things Never Change,” in Numbers Don’t Lie: 71 Stories to Help Us Understand the Modern World (New York: Penguin Books, 2021), pp. 86–89; p. 88. But see 2018 math, science, and literacy comparisons (“Is the US Really Exceptional?” at p. 60). ↩︎
  11. Arendt, The Human Condition, quotations at pp. 1–3. ↩︎
  12. ESRI, “Geospatial Thinking,” https://www.esri.com/en-us/geospatial-thinking/overview (accessed 16 May 2022); see also their social media app, “StoryMaps.” ↩︎
  13. United Nations Conference on Trade and Development (UNCTAD), Digital Economy Report 2019—Value Creation and Capture: Implications for Developing Countries (New York: United Nations Publications, 2019), p. 12. ↩︎
  14. UNCTAD, Digital Economy Report, p. 7. ↩︎
  15. TeleGeography, “Submarine Cable 101,” Submarine Cable Frequently Asked Questions, https://www2.telegeography.com/submarine-cable-faqs-frequently-asked-questions; see also their Submarine Cable Map, https://www.submarinecablemap.com/ (accessed 18 June 2022). ↩︎
  16. United Nations Office for Outer Space Affairs (UNOOSA), “Online Index of Objects Launched into Outer Space,” https://www.unoosa.org/oosa/osoindex/search-ng.jspx?lf_id= (accessed 16 May 2022). ↩︎
  17. See summary in Nibedita Mohanta, “How Many Satellites Are Orbiting Earth in 2021?”, Geospatial World Blog, 28 May 2021, https://www.geospatialworld.net/blogs/how-many-satellites-are-orbiting-the-earth-in-2021/ (accessed 16 May 2022). ↩︎
  18. ESRI, Satellite Map, https://geoxc-apps2.bd.esri.com/Visualization/sat2/index.html (accessed 16 May 2022). ↩︎
  19. See UNOOSA, Report of the Committee on the Peaceful Uses of Outer Space: Sixty-fourth Session (25 August–3 September 2021) (New York: United Nations Publications, 2021), chapter 1, § E, item 17, p. 4. ↩︎
  20. https://autonomousweapons.org/ (accessed 11 May 2022). ↩︎
  21. James Dawes, “UN Fails To Agree on ‘Killer Robot’ Ban As Nations Pour Billions into Autonomous Weapons Research,” The Conversation, 20 December 2021, https://theconversation.com/un-fails-to-agree-on-killer-robot-ban-as-nations-pour-billions-into-autonomous-weapons-research-173616 (accessed 17 May 2022). ↩︎
  22. Stewart Sugg, “Slaughterbots” [2017], YouTube, 17 October 2019, https://www.youtube.com/watch?v=O-2tpwW0kmU, at 7:09-7:42 (accessed 2 May 2022); see also CBC Ideas, “Quit Using The Terminator As An Example of AI Gone Wrong, Argues BBC Reith Lecturer,” CBC Radio, 10 January 2022, https://www.cbc.ca/radio/ideas/quit-using-the-terminator-as-an-example-of-ai-gone-wrong-argues-bbc-reith-lecturer-1.6309630 (accessed 2 May 2022); follow links to Stuart Russell’s 2021 BBC Reith Lectures, “Artificial Intelligence and Human Existence,” aired in two parts on CBC Ideas. ↩︎
  23. https://www.ajl.org/. ↩︎
  24. https://autonomousweapons.org/take-action/. ↩︎
  25. https://www.indigenous-ai.net/. ↩︎
  26. https://winnsox.com/contact. ↩︎
  27. https://winnsox.com/submissions. ↩︎