Generation AI Part 1
The Other Environmental Crisis
Introduction
After the climate crisis, the most pressing environmental issue affecting human life on Earth is the rapid development and mass deployment of a vast array of digital technologies powered by Artificial Intelligence (AI), built by humans for humans, and run on the twin fuels of planetary and human energy. The 2020s have seen a massive push towards the naturalization and domestication of AI by a relatively small number of global megacorporations, who, enabled by public and private funding across institutions and industries, are actively steering a common AI future for humanity. This push is an effect of the problem of AI itself, underscored by a tangled mass of ambivalent research and development, competing interests, ambiguous application, poorly understood historical connections, and underdeveloped sociocultural and political awareness. At risk are the valuable cognitive and behavioural assets that have contributed to the complexity and durability of human traditions, languages, and cultures.
Indigenous scholars have been especially alert to the risks inherent in the AI project. As Ashley Cordes observes, “AI is largely framed in consumer industries as tools or products to make life’s wide range of tasks easier, quicker, and often what is perceived as better. Since AI is trained with data to do things such as reason, predict, and represent, data become the archives of profound significance and vulnerability,” the gathering and use of which is often “not commensurable with various Indigenous understandings of data as sacred and necessary for survivance and self-determination.”1 Hemi Whaanga concurs, emphasizing that “knowledge and information are the intellectual capital generated by communities, tribes, and knowledge holders over multiple generations,” which, for Indigenous peoples, is at the heart of “a holistic, dynamic, innovative, and generative system that is embedded in lived experience.” AI, as an expression of the “underlying goal to create a global village based on cultural, social, political, and economic homogenization,” destabilizes such systems in and through its operations.2
These warnings highlight the degree to which current work on AI and resultant technological iterations are representative of particular cultural interpretations of the human person, the brain, and society. In doing so, they invoke the biological and environmental contexts that support the dignity and diversity of human life around the world, with special emphasis on the morphological impact of human-built systems and their interactions over space and time. Analysis and assessment of this impact is predicated on the assumption of an intimate and knowable interrelationship between the universal and the relative in anthropological terms. If it is possible that the human experience of and relationship to reality “depends more on how our bodies are structured to perceive than on the object[s] of perception” themselves,3 then the way in which AI is designed to interact with those structures, as a universally applied, human-built addition to a diversity of relative human biopsychosocial and environmental contexts, demands far greater scrutiny and regulation across the board.
Broadly speaking, the problem of AI as a human environmental crisis may be divided into three interdependent layers: the sociocultural, the neurobiological, and the relational. By way of identifying the challenges and opportunities before us in tackling this problem, this three-part article series explores each of these layers in turn.
If the inner layer of the bid for the naturalization and domestication of AI is neurobiological, the outer layer is sociocultural. Part 1, “Saving Celeste,” examines the Google x Douglas Coupland collaborative project, “Slogans for the Class of 2030,” in relation to an industry-led imperative to hook AI into every aspect of human living. The project and its accompanying material serve as a handy catalogue of prevailing historical and contemporary social and cultural arguments in support of unimpeded and unquestioned commercial AI development and deployment. The story of Celeste, the child-mascot featured in Google’s AI advertising campaign, unfolds as a triumph of the cult of youth over age, of scientific and evolutionary progress, of successful interspecies integration, and of AI-powered human potential as a proxy for profitable commercial exploitation of the human lifeworld through the cooption of language and culture by machine learning algorithms.
Part 2, “Redefining Success,” delves into the neurobiological research, where what is known about the workings of the brain intersects with a wide range of ambitions in the field of AI, simultaneously conditioning expectations for and steering the direction of AI development and application. Far from the lofty, holy-grail pursuit of an AI that is independently and convincingly human, over the last decade, work in a wide range of disciplines and their sub-fields has demonstrated that, at the neurobiological level, successful AI does not have to be good: humans just need to be that little bit worse. Change the definition of AI success on this basis, and a new world of wide-ranging opportunities opens up. In its game-changing magnitude, this fact rivals the nineteenth century’s discovery of the method of discovery. This redefinition of AI success is the very foundation for the runaway digital industrialism running parallel to and magnifying the disastrous environmental effects of the previous Industrial Revolutions.
Part 3, “I Have Studied Language, Behold How Humane I Am,” rounds out the discussion by analyzing the relational aspect of AI use and abuse as it meets with the human need for connection and communication. This layer, where the sociocultural and the neurobiological meet, is the encompassing one that reveals the dangers of the current generation of AI and AI research and development across the board. From the secret creation and tracking of individual “digital footprints” and “emotional fingerprints” across devices and applications for the purpose of building and acting on mental health profiles, to working backwards from words and images on social media posts to genes and ancestry, the weaponization potential of AI against the person via natural language processing and image decoding is well under way. Positive selection for AI-compatible human traits is an undercurrent in this area of AI research and development, in which the common trope of more and better AI interaction, from mechanical robots to software chatbots, posits an AI-powered world as the normative environment for human being. It is against this ecological and ideological backdrop that individuals are being evaluated for domestication and domesticability on a coevolutionary path with AI.
The tripartite discussion concludes with “Coda: A Political Question of the First Order,” which suggests ways forward for thinking through and acting on this human environmental crisis. We are called to move away from the technologically and evolutionarily superstitious, deterministic, all-or-nothing inevitability approach to AI development and deployment, and to step, individually and in community, into our creative, ethical, and political power for the active shaping of AI in the present—for our common future.
PART 1: Saving Celeste
Slogans for the Class of 2030
In 2021, Google and Canadian author Douglas Coupland collaborated on the AI project, “Slogans for the Class of 2030.” Google sought to answer the question, “how can future generations use Artificial Intelligence to unlock their creative potential?” For the project, Coupland handed over his printed oeuvre, totalling some 1.3 million words, which Google matched with a contribution of the same number of words from social media posts. The data sets were then combined using a specific machine learning (ML) algorithm (or bot) to generate slogans, selected and finished by Coupland. The outcomes were therefore derived from linguistic sequencing patterns, predicted from syntactic and semantic content present in the data sets.4 The collaborators argued that the machine had learned Coupland’s own voice, such that the results fit with his textual language patterning and resemble his prior work in artistic sloganeering, as in his travelling “Slogans for the 21st Century” exhibition (2011–2014).5 The upshot of this experiment, according to the collaborators, is the promise it holds out for AI as a support of the individual, affording “new ways of discovering what’s inside us” and offering “a whole new league of determining individuality.”6 The reality of this specific mode of determination, however, is ambivalent. As Cathy O’Neil demonstrates at length in Weapons of Math Destruction, algorithms, like all other forms of artificial intelligence, encode all the biases, positive and negative, of human beings.7 Coupland’s own commentary on the project reveals significant cautions and limitations. “Will it be possible in the future,” he asks, “to create a Doug App, so to speak? Possibly. But why would you want my app when you could become your own app, too?” AI also holds out the promise of the stratification and homogenization of language within a frightening cul-de-sac of solipsistic recursivity on the same basis: “one thing I truly believe about the future,” says Coupland, “is that in the future we will all be speaking with ourselves, in whatever means it takes.”8
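Neither Google nor Coupland has published the project’s model, but the kind of sequence-based prediction described above can be illustrated with a deliberately crude, order-two Markov chain over a combined corpus. Everything in the sketch below (the sample strings, the chain order, the slogan length) is a hypothetical stand-in for illustration, not the project’s actual algorithm.

```python
import random
from collections import defaultdict

# Toy stand-ins for the two corpora: Coupland's oeuvre and Google's
# social media contribution (hypothetical samples, not the real data).
coupland_text = "time is a beautiful thing and memory is the most fundamental capacity"
social_text = "we are here because we want technology to happen and we can think the same way"

def build_chain(text, chain=None):
    """Index each pair of consecutive words by the words that follow them."""
    chain = chain if chain is not None else defaultdict(list)
    words = text.split()
    for i in range(len(words) - 2):
        chain[(words[i], words[i + 1])].append(words[i + 2])
    return chain

# Combine both data sets into a single model, as the project combined
# Coupland's 1.3 million words with an equal volume of social media text.
chain = build_chain(social_text, build_chain(coupland_text))

def generate_slogan(chain, max_words=8):
    """Walk the chain from a random seed: each next word is predicted
    purely from the sequencing patterns of the training text."""
    state = random.choice(list(chain))
    out = list(state)
    for _ in range(max_words - 2):
        followers = chain.get(state)
        if not followers:
            break
        nxt = random.choice(followers)
        out.append(nxt)
        state = (state[1], nxt)
    return " ".join(out)

print(generate_slogan(chain))
```

A production system would use a neural language model rather than a word-pair chain, but the workflow the project describes, in which the machine proposes candidates from learned sequencing patterns and the human author selects and finishes them, is the same in outline.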
Ostensibly, the “Slogans” project tests the specific ability of an AI to produce novel texts with conceptual, linguistic, and artistic merit—and therefore social and cultural value—from machine learning by way of recombination. As Coupland points out, this method-to-outcome relationship is “no different than the text recombining of the Futurists in the 1920s, or the cut and paste text techniques pioneered by William Burroughs in the 1950s and 1960s.” “For that matter,” he continues, “one might also include the Surrealists who believed that the juxtaposition of the everyday could reveal previously unseen buried emotional truths.”9 To this synoptic history of modern literary techniques may be added more than 60 years of digital poetry, where natural and artificial language meet with recombinatory force.10 There is nothing new in accessing recombinatory techniques to create text; even if machine-generated, they are culturally familiar. In addition, the distinction between human author and machine author has become more blurred over time: text autogenerators are ubiquitous, necessarily avail of techniques of recombination, and have already changed the way people use and think about written language. Even bot poetry is not the avant-garde outlier it once was. It is now accepted as a valid form of artistic—if not entirely literary—expression, and has achieved the status of a specialized genre.11
The preponderance of human-generated text of all genres across all media—arising from a panoply of compositional methodologies—demonstrates that AI is not required to create it.
In 2021, there is nothing particularly cutting-edge about the concept of a “creative bot,”12 but the degree of penetration of such bots into all areas of language production is growing apace. A host of AI-powered tools like Grammarly and Anyword claim to provide their users with autogenerated text for a range of writing scenarios. According to these companies, due to the content-soaked state of social media, relentless expectations for continuous content feed, and lack of time, creativity, ability, and/or confidence across writing contexts from emails and university essays to advertising copy, human text composition is increasingly inefficient and ineffectual. Anyword, for example, churns out various genres of “AI-optimized” ad copy based on an algorithm claimed to have been trained on $250 million USD of “ad spend.”13 A published testimonial on the Anyword website declares, “Stop thinking! This is the best thing that happened to my team in a while! Makes post text something we barely need to think about!”14 A simple Internet search using a distinctive phrase from their sample ad copy for an espresso maker reveals the direction of crude communicative redundancy in which this sort of tool takes the user, offering up a series of phrases recycled from various corners of the World Wide Web.15 Meanwhile, the preponderance of human-generated text of all genres across all media arising from a panoply of compositional methodologies demonstrates that AI is not required to create it. In this sense, the “Slogans” exercise could be considered old-fashioned, and even redundant. Where, then, does the real difference lie?
The real difference lies in the form of AI with which Google wants us to merge. Stripped of their colourful visuals, a selection of slogans generated by the project, below, gestures towards the interconnected range of ambitions for this generation of AI:
We are here because we want technology to happen.
Time is a beautiful thing.
We can think the same way.
Memory is the most fundamental capacity of all.16
The project’s combination of strategies represents the relatively easily achieved machine-learning outcomes providing foundational support for newer, harder-to-achieve ones. The easier tasks constitute the more comfortable and familiar aspects of AI utilized in the project: calibrating a bot to master the formulaic or generic conditions of slogans and aphorisms, which are structurally identical to a simple declarative sentence of one proposition, or creating rules for basic conceptual and lexical correspondence and congruence among selected semantic domains and networks,17 which can be reduced to geometrical representation, as for example in Voronoi tessellation.18 The more ambitious and harder-to-achieve aspects, representing the traditional limits of AI, are obscured, but are the real ground of both the project and the campaign.
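To make the geometrical reduction concrete: if words are represented as vectors, a set of “domain” seed points induces a Voronoi tessellation of the space, and assigning each word to its nearest seed places it in a semantic domain cell. The sketch below is illustrative only, assuming toy two-dimensional vectors in place of real high-dimensional embeddings; the words, vectors, and domain names are invented for the example.

```python
import numpy as np

# Hypothetical 2-D "embeddings" standing in for real high-dimensional
# word vectors (illustration only).
words = {
    "memory":  np.array([0.9, 0.8]),
    "time":    np.array([0.8, 0.9]),
    "machine": np.array([0.1, 0.2]),
    "robot":   np.array([0.2, 0.1]),
}

# Seed points for two semantic domains. The set of points closest to a
# given seed is, by definition, that seed's Voronoi cell.
domains = {
    "mind":       np.array([1.0, 1.0]),
    "technology": np.array([0.0, 0.0]),
}

def assign_domain(vec):
    """Nearest-seed assignment, i.e. locating vec's Voronoi cell."""
    return min(domains, key=lambda d: np.linalg.norm(vec - domains[d]))

for w, v in words.items():
    print(f"{w} -> {assign_domain(v)}")
```

A congruence rule of the kind the endnotes gesture at could then be as simple as requiring a candidate slogan’s content words to fall within compatible cells.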
The “Slogans” project is about the return of behaviourism and refining the means to manipulate individuals and groups at the neuronal level via natural language.
The “Slogans” project is less about “unlocking creative potential” and supporting “individuality” than about the return of behaviourism and refining the means to manipulate individuals and groups at the neuronal level via natural language as one of the primary interfaces between humans and their world. Whatever the medium, be it predictive text or bot poetry, a handwritten note or a tweet, readers and writers have sophisticated and traditional expectations about the written word, especially the finer aspects of syntactic, semantic, and sentiment encoding. This crop of AI is designed to crack into and appropriate this knowledge, such that human expertise is simultaneously put in service of and delimited by the machine.19 And, while “strict programming” and “the inverse fact of its lack of a structure for invention” have been an ongoing problem for all forms of machine-generated text,20 increasingly evolved algorithms and strategies for machine learning, combined with the exponential growth in accessible data to train on, are rapidly changing this situation. The “Slogans” project fits neatly into the current push with this generation of AI to link predictability, novelty, and sentiment more accurately and actively through data mining, and to use this historical data to formulate automatic responses to perceived environmental stimuli within specific limits.
The “Slogans” project is just one point in a larger argument being made by Google that there is nothing to fear in AI but fear itself.
Before moving on to the nitty-gritty of neurobiological research underpinning contemporary AI ambitions,21 it is important to outline some of the more outstanding elements of this Google campaign, which are instructive in terms of the social and cultural components of the digital industry’s AI sell-job. The “Slogans for the Class of 2030” project is just one point in a larger argument being made by Google, and other innovators in the field, that AI is a homely and human companion of human being, and that there is nothing to fear in AI but fear itself. To this end, Google’s advertising campaign, in which the project is embedded, is a mashup of persuasive strategies that harnesses enduring touchpoints of the scientific and technological evolutionary progress narrative. In the video-ad companion to the “Slogans” project alone, the range is impressive. In making its case for AI, the video ad refers to the human evolutionary predilection for interspecies relationships (in one shot, a fluffy white purse-dog, whom we adore, becomes one with a robot vacuum cleaner, which we find fun and useful); reminds us of our historic reliance on machines (no problem, we remain in control of these); clarifies popular misunderstandings about AI and associated techniques (stop panicking! AI is just another way of saying “machine learning”: humans learn, machines learn, learning is always a good thing); reassures us that the graduands of 2030 are in good hands with AI (Google’s guardianship of these children is preferable to ours; elders are superfluous in a landscape of AI-supported “individuals”). Analogy and elision are go-to modes: the creative potential of AI is the same as any tool (like a pen or a paintbrush); neural networks are neural networks—there is no need to make a distinction between organic ones and artificial ones (that’s a mere back-end concern); AI is simultaneously timesaving and leisure-increasing (we can offload simple and boring tasks, reserving the interesting and complex ones for ourselves); AI is a new-form superpower (a more forgivable version of the Übermensch is resurrected in the mind); finally, AI is a bringer of joy (not a harbinger of “zombie apocalypse” or “1984”).
Our brains are being sold back to us in a form imagined by Google.
In this reframing and rebranding exercise, our brains are being sold back to us in a form imagined by Google, and we are asked to keep our objections to ourselves in the face of this inevitability. We are asked not to notice the puzzling dissonance that seems to characterize the logic. Enlisting a “Creative” to demonstrate the important place of—if not absolute need for—ongoing human intervention in machine-learning processes and their outcomes is intended as reassurance of AI’s positive contribution to language, culture, and society, and extra insurance against human obsolescence. This message comes, however, at precisely the moment when Generative Adversarial Networks (GANs) are obliterating the need for human-supervised machine learning altogether. AI is emphasized as proper to humans rather than machines, a sure case of the computer-brain analogy taken to extremes; at the same time, AI is de-emphasized as socially and culturally constructed, being presented as both apolitical and ahistorical. These arguments are given cohesion and coherence by way of a hundred-year-old marketing trick—the promotion of the cult of youth over age—elided with the so-called “Planck Principle” in science. In and through this narrative we are asked to forget that Google is a commercial entity whose true metric of meaning for the Class of 2030 is the aggregate rate of AI adoption by coerced adaptation over consumers’ lifetimes.
Feeling a Certain Way about AI
The Google video ad for the “Slogans” project evokes four dichotomies at the core of sociocultural arguments about AI: 1) thinking and feeling; 2) age and youth; 3) false perceptions and true perceptions; 4) the past and the future. These dichotomies are compressed in the image of “Celeste,” the child to whom we are introduced at the beginning of the video. According to the narrator, “her generation will never experience a world without artificial intelligence.” The soothing yet ebullient voiceover goes on to ramp up the clash between generations by stating, matter-of-factly, that “a lot of us have grown up feeling a certain way about AI.” Suppressing the conjunctive “but,” which might invite counter-argument or speculative commentary based in nuance, the voice declares with utterly disconnected and contrastive assurance the fait accompli that “Celeste and the class of 2030 will think about it totally differently.” Celeste—the celestial one—is, by birth, exempted from the inexorable faults of formative experience that we have suffered in the long causal chain from youth to age, because she and her fellow graduands “will grow with AI as a part of their daily lives.” She guides the true way. We, the aged, shrouded in the completed past, falsely “feel” in the old way; she, the eternal youth, inhabitant of the future, truly “thinks” in the new way. Douglas Coupland, representative of his own Generation X, enlisted by Google “to talk to Celeste’s generation,” does so as though in a time capsule, with AI acting as an ironic bridge between generations who have lost connection with one another, even as they occupy the same spatiotemporal domain. The older generation will marvel at the ability of this AI to reach Celeste in her futuristic state; the younger generation will smile at the well-intentioned quaintness of it all. Outside of the project, the positive effect of a general emphasis on these dichotomies can be found in everyday discourse, especially in relation to technology, youth, and education. Curious spatiotemporal compressions are revealed in casual turns of phrase:22 in defense of AI-powered digital technology in the classroom, a teacher today is able to say—with perfect seriousness—“these kids are growing up in a different time than we are,”23 without either twigging to the existential, temporal, and causal impossibility of the statement, or acknowledging the role of cultural conditioning and ideology that it admits. Such is the automatic response that purposeful and repeated evocation of these dichotomies elicits in everyday life.
That Is No Country for Old Men24
In the name of human progress and evolution, we have been told that there should be no limits to AI; there can be no rational objection to it on the same basis. Our “feelings” about AI are to be put away, like fusty Victorian scientific romances and their doom-bearing authors. Old fuddy-duddies, with their sentimental concerns about science and technology, are irrelevant in the new and inevitable reality, the ever-expanding, futuristic now. Of course, this futuristic attitude, too, has a history, incubated in the analogue, proto-AI past. That the nineteenth century, especially its last quarter, should represent a watershed of spatiotemporal thinking in this mode is unsurprising, given the radical shift in industrializing nations away from dependence on limited, local human physical energy and lifecycles, towards other sources of seemingly unlimited, delocalized forms of energy driving durable and inexhaustible machines. In this scenario, humans became accustomed to their role as overseers of spatially and temporally extended, increasingly automated industrial processes and systems from top to bottom, as well as consumers of their products.25 As the motivation for even more profit and always larger markets for produced goods grew, so did the drive for more overall industrial efficiency and machine reliability. These trends accelerated the rate of technological change, and along with it came an increase in and generalization of scientific knowledge—and opinion. The great global age of the tie-in had begun.
In this atmosphere, investigators of the natural world, like Charles Lyell and Charles Darwin, developed their seminal views on geological and biological evolution, and younger scientific thinkers like Max Planck in physics and Nikola Tesla in electro-mechanical engineering incubated their revolutionary ideas. During this time, the relationship between science, the private corporation, commerce, and advertising coalesced in its basic form. Darwin was assisted in gaining approval for his more controversial ideas by a dedicated group of fellow researchers who, recognizing the obstructive social forces at play, set out to overthrow the old guard and convert the British scientific establishment to a new way of thinking about evolution.26 Tesla became a master at marketing his ideas to the public and winning corporate investors, stirring up new wonder, wants, and needs among the burgeoning “consumer class” in order to carry out his life of electrical invention.27 By 1900, Max Planck had worked out important aspects of quantum theory. He, like others before him, had also been deeply affected in his journey by the role of authority in shaping scientific knowledge. His autobiographical reflection on this theme became enshrined as a predominant social-scientific principle from the last half of the twentieth century: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”28 In an earlier iteration of this sentiment, he considers it proof “of the fact that the future lies with the youth.”29
Planck seems to have considered his assessment of the historical trajectory of theories in his field not just a fact, but a law of science innovation. But he ultimately recognized that the problem of knowledge development is not a simple question of biological lifecycles. His age-related bitterness derived from his experiences as a student, “who,” he says, “is convinced that he is in possession of an idea which is in fact superior, and who discovers that all the excellent arguments advanced by him are disregarded simply because his voice is not powerful enough to draw the attention of the scientific world,” unlike well-known figures, who are “simply above argument.”30 He concedes, rather, that methods of education and techniques of investigation, alongside unexpected juxtapositions of knowledge, are the true driving forces of scientific change, not mere biological turnover nor wholesale scrapping of older knowledge as a form of logical procession.31 For example, he points out that although “the atomic idea” was “extremely old,” it was the new way in which it “began to make itself felt” by “experimental investigation” that made the difference, leading to innovation in thinking on thermodynamics.32
Age is a convenient analogy for what is really an argument between systems and ways of knowing.
While there is no reason to oppose the observation that structural change and change in group dynamics will occur with the rise of new scientific paradigms—with the effect that the “older school” eventually recedes from view in favour of the “new”33—the paradigms themselves, despite the birth and death of human generations, are not inevitable. The conscious designation of older generations and multivalent expressions of human being and doing as superfluous by way of chronological succession, as though in an automatic march towards continual improvement, is not historically tenable and is, in fact, antithetical to genuine scientific and technological development. Age is a convenient analogy for what is really an argument between systems and ways of knowing. The pervasiveness of the idea, from the concept of the “digital native” to the belief that older adults are the exclusive victims of misinformation and conspiracy theories on the Internet due to age-related ignorance,34 works against critical understanding of multigenerational dynamics in learning and adaptive processes. The narrow construction of an in-group and an out-group in this circumscribed manner has the further effect of excluding ways of knowing that are not part of the Western tradition. From both diachronic and synchronic perspectives, only those in control of the dominant narrative in service of market creation and maintenance through actively directing the relational networks of individuals and groups are favoured. Today, the individual exists as an isolated, manipulable data point, increasingly divorced from socioculturally relevant traditional structures and supports.
Industry has seized on a simplistic “survival-of-the-fittest” evolutionary principle in relation to its harnessing of scientific and technological research innovation. Quite apart from the important recognition of the nuanced role of youth in the anthropological concept of cumulative culture, human learning, and evolution,35 the specific elevation of youth to the top of the social and scientific hierarchy edges towards a subtle form of applied eugenics. The heady mixture of the so-called Planck Principle and consumer advertising works to create a narrative of inevitable scientific and technological progress in which only the adaptable, the adapted, and the new thrive. In this paradigm, objections are liquidated on the grounds of unfounded fears, lack of knowledge, superstition, the intransigence of tradition, and social superfluity imputed to age. “Youth” is simultaneously held up as a sign of innovation and represented as a fit for the corporate vision, rather than a sign of rebellion or resistance to it. To this end, the Google AI campaign resurrects the earlier industrial ideal of “youth” as “a language of control,” marked by the desired traits of “innocence and malleability.”36
Hooking into the human lifecycle entails a cultivation and exploitation of the figure of youth as material to be shaped or moulded such that other demographics follow by example.
In addition to hijacking the neurobiological substrate, hooking into the human lifecycle entails a cultivation and exploitation of the figure of youth as material to be shaped or moulded such that other demographics follow by example. Neil Postman argues that new industry in the last half of the twentieth century effectively scrapped the dominant metaphors for youth of Western modernity. Accordingly, children are no longer considered to be “blank tablets nor budding plants”; rather, as he starkly puts it, “they are markets.” In his view, common mass-media exposure and a general redistribution of purchasing power created an environment of equivalence for intellectual and social development, rendering the authority of elders irrelevant.37 But as Stuart Ewen contends in the now-classic Captains of Consciousness: Advertising and the Social Roots of Consumer Culture (1976), this dynamic turnover of authority from elders to youth was set in motion by industrialization itself, from which a further corollary was the purposeful repositioning of youth by corporations as leaders in social patterns of goods consumption. At the level of the family, parents were unseated as natural authority figures and replaced by corporations. In advertisements as in life, “emulation of the child … implied unabashed involvement in and commitment to prescribed patterns of consumption.”38 This emulation had its roots in the process of industrialization, when youthful physical and mental vigour became tied to advancements in technology, economic growth, and a shift in social authority. As Ewen explains, “youth had provided an ideal, for the transformation in production,” and the resultant “elevation of youth value within the culture had provided an ideological weapon against the traditional realms of indigenous authority as it had been exercised in the family and community in the periods before mass production.”39 Thus Max Horkheimer was able to declare in 1941: “now the rapidly changing society which passes its judgment upon the old is represented … by the child. The child … stands for reality.”40
Celeste and her fellow graduands are made to stand for reality in Google’s “Slogans for the Class of 2030” ad. In this scenario, as in century-old advertisements, the “corporate authority” takes on the patriarchal and caregiver role. The message is that social elders, including parents, are not capable in the face of the new reality. This charge follows the general contours of Ewen’s observation that “youth serve[s] as a cover for new authorities”: in ads, children are seen to teach their parents; corporations reinforce the notion that adult contemporaries are “incompetent in coping with modernity,” exposing the generalized “breakdown in parental know-how.”41 This basic premise of early twentieth-century marketing encouraged a general shift in authority through social education: “adults were instructed to look toward youth for an ‘in-step’ understanding of what was right and proper to the new age.”42 As Ewen observes, “where ads were not written directly for the young, they often spoke in the name of the young against parental attitudes.”43 This ironic opening for the pseudo-liberation of the young from the grips of the old implies that “the needs of the child [are] better understood by industry.”44 Conjoined to the well-worn narratives of technological and evolutionary progress, this century-old advertising strategy has been transformed into a pop-scientific law of human socialization: youth lead and the rest follow—but only after their authority has been progressively dismantled by the twin forces of age and industry advancement under the patriarchal corporate aegis.
Saving Celeste
The bottom line revealed by Google’s campaign is that AI is no longer a back-end concern nor a holy-grail quest for a select group of scientific dreamers.45 And contrary to the version of history that Google would have us believe, AI has been a part of our daily lives for decades. It is functional, pragmatic, and everywhere. By way of a century-old marketing technique, we are purposefully distracted from the irony that Celeste-the-child is being consciously pre-programmed at the level of the brain for inclusion in AI systems devised by an intergenerational coterie of elites, who are not tomorrow’s youth, but today’s adults—and irresponsible ones, at that. These adults are not only in the process of capturing a market; they are wiring brains and bodies in a specific way while those brains and bodies are at their most susceptible. Should we not be alarmed by the public declaration that these profit-driven corporate adults already have our children in thrall from the beginning of their lifecycles? By the exhortation to give up and give in? Should we not be even more alarmed that we are being asked to forget that it is not AI, broadly speaking, that is necessarily the problem, but the specific form of AI that Google and others are selling? Put even more cynically, our reservations and objections are meaningless, because, in Google’s view, Celeste is already a lost cause: particularly those parts of Celeste’s brain that would otherwise be taken up creating the necessary biological neural networks to effect things like empathy, emotional control, idea-generation, language, memory, decision-making, and pattern recognition—without machine assistance or control. Do we not have a moral and ethical obligation to save Celeste from this fate?
Celeste-the-child is being consciously pre-programmed at the level of the brain for inclusion in AI systems devised by an intergenerational coterie of elites who are not tomorrow’s youth but today’s adults.
A more generous view of the situation turns such negatives into resolute positives in the name of art and creativity, like the “Slogans” project. According to this view, so-called “creative bots” resist slippage into the realm of ethical concern because of their forward-leaning, generally benign potentials, even in the context of larger, AI-powered environments, like the Internet or social media platforms. In support of this perspective, Madeline Lamo and Ryan Calo observe that
while creative bots may create genuine confusion and even chaos, they typically represent a harmless, imaginative format that provides artists, researchers, and others with a new tool for expression and inquiry. The fact of automation permits the botmaker to achieve an audience reach and creative scale that might be hard to accomplish otherwise. Importantly, some bots achieve their programmers’ artistic or research-driven aims best when users either believe the account is human-run or cannot tell whether an account is automated. The very ambiguity around whether the interaction constitutes genuine interpersonal connection, overt deception, or something else, generates new possibilities for storytelling and data collection.46
The excitement of the literary academic over the possibilities for “critical reflection” via creative bots—derived from the integration of “the generative operations” of “generative Twitter bots” and “generative process” with “generative expression” in “generative writing”—provides a route of highbrow escape from humbug attitudes: pushing the boundaries of “machinic modes of expression,” these bots reveal “the software itself to be an active agent in the reading encounter,” such that “functional and thematic attributes mediate self-reflexively upon their operational contexts within digital infrastructure” in a post-literate world.47 Clearly, Celeste has her work cut out for her as artist, critic, and all-round prosumer participant-dupe.
Traditionally authored texts are carriers of wisdom about the language-brain-environment nexus.
Aside from their role in providing linguistic training data for machine learning, traditionally authored texts, regardless of modality, are carriers of wisdom about the language-brain-environment nexus. They provide insight into the inner workings and patterns of this nexus that the current generation of AI puts at risk. Reading between the lines, a drama of relationship is played out in the interweavings of these cultural productions. Although certainly forms of “action” within a “plot,” these instances of human revelation are not translatable to the usual consequential and material focus on what characters do in the course of a story; moreover, they are embedded in the wider fabric of human community across time and space. Whereas AI relies on reduction of language to mathematical principles of predictability and unpredictability, the novelty produced by human authors comes from creative self-expression in language based on interconnections fuelled by memory, a combination of embodied experience and depth of knowledge.48 Autogenerated and predictive language production is the AI equivalent of being permanently trapped within an inescapable cybernetic loop, in which the imaginative life and linguistic self-expression unfold as formulaic repetition according to machine limits, not requiring involvement of the biological element.
The offloaded techniques of learning that AI researchers and developers are pursuing for machines are, in fact, at the very foundations of the higher-level creativity that Google posits will be the final frontier of human endeavour. The argument is teleological, in the sense that it does not account for conditions of non-development or lack of development of these aspects of cognition in humans themselves, and ironic, in the sense that a corollary of intensive AI is the removal of the necessity for these aspects of cognition from the human environment. Google’s assertions (and those of well-meaning critics), far from being conjectured on the basis of a developing child’s brain, apply only as an argument of very specific augmentation in relation to the creative powers of the old-school, fully baked, adult brain we already know, whose neural networks and cognitive strategies still have a chance of surviving, or at least mitigating, the latent destructive powers of a new AI environment. For those brains, what happens next, when we are in the position of “talking to ourselves” in the context of having become our “own app”? What of youthful brains? Who will be listening in? What happens when we are all homogenized clichés of ourselves? Whose powers of self-reflexivity will matter most? For the AI community, the statement attributed to Richard Feynman, “What I cannot create, I do not understand,” is a foundational principle; in machine-learning terms, however, creation is actually a mode of replication and understanding is actually a mode of construal.
Behind the “Slogans” project’s otherwise insipid readings of the human lifeworld, a high-stakes game is being played. Beyond “hearts and minds” ideas about propaganda, virality, and fake news, it is the various modes of hacking the sensorium through machine learning techniques and AI platforms that corporations like Google want to refine and amplify in order to hook into individuals like never before. Spontaneity, inventiveness, timeliness, congruity: these are the targeted qualities that will carry the newer forms of AI forward, blurring the distinction between our own thoughts, feelings, voices, even actions, and those of the machine.
Citation:
Reid, Jennifer. “Generation AI Part 1: The Other Environmental Crisis.” Winnsox, vol. 3 (2022).
ISSN 2563-2221