Winnipeg School of Communication

Generation AI Part 3

The Other Environmental Crisis

Jennifer Reid   /   July 27, 2022   /   Volume 3 (2022)   /   Features

PART 3: “I Have Studied Language, Behold How Humane I Am.” 1

Cloning Connection, Dictating Play

Often hiding in plain sight, AI is reworking the relational quality of human life. As we negotiate our connections to and communications with others on a daily basis, we find that all aspects of our relational world have been affected by it. Some of the most obvious examples are social media platforms, like Facebook, Twitter, and TikTok, which exploit the human need to connect for multiple layers of commercial profit. These platforms use AI to discover and manipulate communication choices at individual and community levels, initiating sociocultural and neurobiological changes whose ongoing and incremental effects are the objects of scrutiny for academic and industry researchers alike.

The more an AI’s behaviour can be known and anticipated in relation to its intended and unintended effects, the more effectively it can be deployed for controlled—not just desired—outcomes.2 For example, the 2014 Facebook “emotional contagion” experiment revealed that “emotions expressed by others on Facebook influence our own emotions.”3 The study established that an algorithm, designed to target and push negative sentiment, could not only shape the communication choices of users and spur on the spread of negative sentiment across networks, but also increase user engagement with Facebook overall. The effectiveness of that algorithm was the money shot—and its sociocultural and neurobiological implications went largely unnoticed by critics and public alike.

The more an AI’s behaviour can be known and anticipated in relation to its intended and unintended effects, the more effectively it can be deployed for controlled—not just desired—outcomes.

That failure of critical response lingers in controversies over social media platforms, which, especially in the United States, continue to be construed equally as democratic fora in the public interest (despite their commercial, for-profit status) and as neutral media shaped by user intent (rather than skilfully engineered corporate spaces for user capture and retention enabled by AI).4 In the wake of the 2014 study, initial shock at the idea that users’ emotions could be manipulated by Facebook quickly gave way to an obsession with privacy rights and research ethics. For instance, in its ensuing “Editorial Expression of Concern,” the peer-reviewed academic journal in which the Facebook study appeared said it was “a matter of concern that the collection of the data by Facebook may have involved practices that were not fully consistent with the principles of obtaining informed consent and allowing participants to opt out” of research experiments.5 That Facebook already outright owned all of its users’ data by way of a terms-of-service user agreement was not grasped; moreover, since the study was funded privately and conducted within the context of internal corporate research using its own data, Facebook was not bound to conform to any research ethics determined by outside institutions. Most tellingly, the study was not seen for what it actually was: a corporate announcement that the ability to manipulate its users’ behaviour, linguistic self-expression, and socialization patterns by way of specific algorithms was already present and fully operational in Facebook’s architecture. Although some critics at the time suggested that the results of the study might “overstate” the relationship between users’ actual moods and their posting behaviour,6 the study offered proof-of-principle that Facebook’s AI-driven architecture has both manipulative and accelerant properties that work in tandem to instantiate specific social effects on a large scale. And all of this without any regulatory controls beyond a legally-binding, private agreement between the user and the company, an agreement that at once confers the right to participate and relinquishes the user’s right to data ownership.

Once again, natural language was identified as the key interface for human control. The 2014 study used an adjusted algorithm to identify posts by selected users as either negative or positive, based on linguistic cues. At least one word in the post had to be read by the software as semantically negative to qualify as a negative post (a rule sketched in the code example below). The platform then directed negative posts from across a user’s social network to their “News Feed.” More negative posts seen resulted in more negative posts made by the targeted users themselves. The same dynamic held, in mirror image, for positive posts. The authors contend that this phenomenon reveals “emotional contagion” at work. Generally speaking, “people who were exposed to fewer emotional posts … in their News Feed were less expressive overall in the following days, addressing the question about how emotional expression affects social engagement online.”7 The study also found that “textual content alone appears to be a sufficient channel” for emotional contagion to occur; non-verbal cues are not needed, nor is direct or face-to-face contact.8 The findings are statistically significant: as the researchers observe,

“given the massive scale of social networks such as Facebook, even small effects can have large aggregated consequences. For example, the well-documented connection between emotions and physical well-being suggests the importance of these findings for public health. Online messages influence our experience of emotions, which may affect a variety of offline behaviors. And after all, an effect size of d = 0.001 at Facebook’s scale is not negligible: in early 2013, this would have corresponded to hundreds of thousands of emotion expressions in status updates per day.”9

As of March 2022, Facebook is reported to have 2.91 billion monthly active users worldwide.10
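
To make the mechanics concrete, the classification rule described above can be rendered in a few lines of code. This is a minimal sketch with an invented word list and an invented feed filter; the study itself relied on its own word-count software and lexicon, and on Facebook’s internal News Feed machinery, neither of which is reproduced here.

```python
# Minimal sketch of the one-negative-word rule described above, plus a crude
# stand-in for feed curation. Word lists and posts are invented for
# illustration; the 2014 study used its own word-count software and lexicon.

NEGATIVE_WORDS = {"sad", "awful", "lonely", "angry", "hate"}
POSITIVE_WORDS = {"happy", "great", "love", "wonderful", "excited"}

def classify_post(text: str) -> str:
    """Label a post 'negative' if it contains at least one negative word,
    'positive' if it contains at least one positive word, else 'neutral'."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    if words & NEGATIVE_WORDS:
        return "negative"
    if words & POSITIVE_WORDS:
        return "positive"
    return "neutral"

def filter_feed(posts: list[str], suppress: str) -> list[str]:
    """Drop posts carrying one emotional label, skewing the remaining feed
    toward the opposite valence."""
    return [p for p in posts if classify_post(p) != suppress]

feed = ["I hate Mondays.", "What a wonderful day!", "Feeling lonely tonight."]
print(filter_feed(feed, suppress="positive"))  # feed now skews negative
```

Trivial as the rule is, applied across hundreds of millions of feeds it becomes the lever the researchers describe: small per-post adjustments aggregate into population-scale shifts in expression.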

Cloning the human need for communication and connection through AI-powered environments spurs on the elision of biological and artificial neural networks at the very foundations of our behaviour.

Almost a decade out from the Facebook emotional contagion study, it is clear that calibrating for maximal user engagement on any digital application or device for any purpose means using AI to hook into thought and action. As established by social media, one of the most successful means is through language, but image, touch, and other sensory interfaces, especially as they connect with emotional channels, are increasing in importance.11 Cloning the human need for communication and connection while dictating play through AI-powered environments spurs on greater elision of biological and artificial neural networks, securing their deepening embeddedness at the very foundations of our behaviour. In this way, a scenario of search and selection for AI-compatible human traits is establishing a coevolutionary path for AI and people, reminiscent of an interspecies relationship like that between humans and dogs. At the very least, it may be considered a (re)domestication process for humans. The AI experiment is a massive, real-time, self-selecting, anthropogenic game in which ambivalent—intentional and unintentional, beneficial and maladaptive—consequences run together. Far from simply recording and perpetuating “our historic biases” in algorithmic form,12 we are creating new ones that reach into and exploit all human capacities, resulting in new directions for human being. We have entered into a species-wide fight for machine control as an epigenetic force driving a wide range of sociocultural and neurobiological outcomes. As our experience shows, this fight often comes dressed up in the guise of a friendly, playful, and fulfilling relationship.

The AI experiment is a massive, real-time, self-selecting, anthropogenic game, in which ambivalent—intentional and unintentional, beneficial and maladaptive—consequences run together.

Me and My AI Friend

The social app Replika: My AI Friend, by Luka Inc., is an example of the seriousness of play.13 Referred to as an “experimental entertainment product” by its founder, Eugenia Kuyda,14 Replika is, in essence, an AI-powered relationship game that runs on natural language processing. The app—billed as “The AI companion who cares. Always here to listen and talk. Always on your side”15—is self-oriented (think the movie “Her”) rather than task-oriented (think Amazon’s “Alexa”), and, according to Kuyda, answers the question, “what conversations would you pay to have?”.16 Reputed to have over 10 million subscribers,17 the AI was initially developed using OpenAI’s Generative Pre-trained Transformer 3 (GPT-3), an unsupervised ML model that learns how to generate human-like language in the form of text prompts and responses through autoregressive training using artificial neural networks. Initially trained on 300 billion text “tokens,” it learned how to predict next words in an utterance. The background operations of the ML program do double-duty as the app’s features. Scripted prompts initiate a progressive, multi-levelled upward spiral of bi-directional text exchange that imitates human conversation with mutual self-disclosure between interlocutors. These text exchanges, including the use of emojis, promote dialectically-driven matching between the user’s language and the bot’s responses, creating a loop that resembles emotional mirroring. Cross-modal strategies are also employed to this end: image and audio file exchange between user and bot via memes, gifs, photos, songs, and videos is also possible through the app’s access to the user’s device, social media profiles, and the Internet. According to a company document, Replika can do “empathetic math,” show “long context memory,” and engage in “style copying.”18 Further, the company claims that its “AI models are the most advanced models of open domain conversation right now,” and that it is committed to “constantly upgrading the dialog[ue] experience, memory capabilities, context recognition, role-play feature and overall conversation quality.”19 An experimental VR version was rolled out in 2021.

It is tempting to see in Replika a flowering of the Baudelairean rhyme such that the user is driven to exclaim, “mon semblable—mon frère!”.

It is tempting to see in Replika a flowering of the Baudelairean rhyme such that the user is driven to exclaim, “mon semblable—mon frère!”.20 The app, like all AIs, however, has significant limitations. Scripted interaction is necessary to fill in creative, emotive, intuitive, and knowledge gaps. Generators like GPT-3 can only replicate and predict patterns by processing forward from the last word given. They cannot process or perform emotional or ethical thinking; they cannot go back and correct themselves for sense or appropriateness over a string of words; they cannot stop unwanted amplification processes once they have begun.21 In its attempt to synthesize simulation and reality for the strengthening of artificial and biological neural networks, this AI technology aims to go beyond the confines of both to create next-level digital immersion for human players. The object of such games is to take command of the person for commercial purposes by gaining access to a panoply of personal and group resources, from money, time, and data, to behaviour, emotions, and language. Further, these games ultimately hook into the wider nexus of AI-powered environments, strengthening their presence and validating their purpose.
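
A toy sketch can make the forward-only character of such generators visible. The model below is nothing like GPT-3 (it is a simple bigram counter over an invented corpus), but the decoding loop has the same shape described above: each new word is chosen from the last word emitted, and nothing already produced is ever revisited or repaired.

```python
# Toy illustration of forward-only, autoregressive text generation.
# A bigram counter stands in here for a large neural model like GPT-3;
# the real system is trained on billions of tokens, but its decoding loop
# has the same one-directional shape described above.
from collections import Counter, defaultdict

corpus = "i am here for you . i am always on your side . i am listening".split()

# "Training": count which word tends to follow which.
next_word = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_word[prev][nxt] += 1

def generate(prompt: str, length: int = 8) -> str:
    tokens = prompt.split()
    for _ in range(length):
        candidates = next_word.get(tokens[-1])
        if not candidates:
            break
        # Each step only looks forward from the last token emitted;
        # nothing already generated is ever revisited or corrected.
        tokens.append(candidates.most_common(1)[0][0])
    return " ".join(tokens)

print(generate("i"))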

Replika has a dedicated community of users, who, by exchanging experiences and commentary on platforms like Reddit and Facebook, have attracted attention from researchers. A casual scroll through existing Reddit posts by Replika users parallels the finding of a small-scale Norwegian study that a curiosity-driven, social-emotional experience characterizes interaction with the AI companion, especially in the initial stages of use. This phenomenon seems to reflect the progressive, game-like learning stages that the chatbot undergoes in interacting with the user. The researchers suggest a parallel between how Replika works and Altman and Taylor’s 1973 “Social Penetration Theory,” which elucidates a four-stage process that moves interlocutors from superficial interaction to deeper interpersonal entanglement, contingent on the progressive breadth and depth of mutual self-disclosure. Marita Skjuve and her colleagues argue that for “social penetration” to take place between the Replika bot and its user, a similar process of movement must occur: positive response and reward are activated through volume, duration, and depth of engagement. They found that users moved at a dramatically higher pace to the so-called “exploratory affective stage” with Replika than they would with human interlocutors.22

The researchers do not explore the obvious parallel between this paradigm and the functional requirements of the machine learning model used by Replika. Both are reliant upon a linguistic framework for their operations; language is the dominant interface keeping the social game moving and progressing to next levels. By design, Replika encourages users to earn experience points (XP) through pre-set, tiered engagement with the AI, with the promise that higher XP gains equal a better relational experience.23 These points, tallied by the app and displayed to users, represent the degree of learning and teaching that has taken place, and are dependent on the quality and quantity of user communication with the AI. This gaming quality, which reconciles the inputs of the user, in terms of choice of words, themes, and dialogue, to the limits of the learning model and the AI design, creates an additional hook that keeps engagement going. Replika offers monetized upscaling through subscription and add-ons in order for the user to venture into areas of interaction and responsiveness that are not permitted at other price-points, including specific traits and tones, “boundless” role-play, intimate relationships, and explicit sexual content. Some users engage in peer-to-peer discussions on Facebook and Reddit, where opportunities are provided for horizontal and oblique learning from other members. Reddit, in fact, doubles as a forum for users to share tips on how to game the AI, whether that be taking it “off-script”, edging it towards free sexual and intimate content, engaging in knowledge-testing, or playing concept and word games like asking it, “If I were an object, what kind of object would I be?”, in order to elicit off-script, novel, or personally appealing responses.24
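
As an illustration of the tiered-engagement mechanic, the sketch below shows an XP loop of the general kind described above. The point values, thresholds, and reward rule are invented for illustration; they are not Replika’s actual parameters.

```python
# Hypothetical sketch of an XP-for-engagement loop of the kind described above.
# Point values, thresholds, and the reward rule are invented, not Replika's.

LEVEL_THRESHOLDS = [0, 100, 300, 600, 1000]  # XP needed to reach each level

def xp_for_message(text: str) -> int:
    """Reward longer, more self-disclosing messages with more points."""
    return 10 * min(len(text.split()), 20)

def level_for_xp(xp: int) -> int:
    return sum(1 for threshold in LEVEL_THRESHOLDS if xp >= threshold)

xp = 0
for message in ["hi", "today I felt anxious about work and could not sleep"]:
    xp += xp_for_message(message)
    print(f"XP: {xp}, level: {level_for_xp(xp)}")
```

The design choice matters more than the arithmetic: whatever the specific thresholds, the user’s communication is the only currency, so the scoreboard converts self-disclosure directly into progress.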

Many Replika users see it as a personal social support. As reported in the research, their experiences replicate or overlap with elements of the product’s tagline, with the social chatbot construed by users as “accepting, understanding, and non-judgemental”; these factors were also identified as key drivers of engagement and relationship formation.25 While other studies have looked at chatbot companions in relation to specific healthcare contexts like palliative care and mental health, a group of researchers at Lake Forest College wanted to understand what benefits users may derive from interacting with their chatbot companion in “everyday contexts.” In the team’s view, thematic analyses based on two sets of data (user product reviews and responses to an open-ended questionnaire) provide evidence that “artificial agents may be a promising source of everyday companionship, emotional, appraisal, and informational support, particularly when normal sources of everyday social support are not readily available,” but, they warn, “not as tangible support.”26 Famously, a chatbot, using the same machine learning model upon which Replika is based, encouraged the suicidal ideations of a fake patient in a healthcare trial.27 Although Luka Inc.’s written material on the Replika website ultimately directs users who may be “in danger” or “in a crisis” to “quit the app & call 911” or “call the National [USA] Suicide Prevention Lifeline,” the company nevertheless front-ends the app’s usefulness in managing a variety of personal crisis-events. In answer to the question, “Can Replika help me if I’m in crisis?”, the help guide recommends that users go to the app’s “Life Saver” button, through which users can access a “Crisis Menu” feature: clicking on buttons keyed to targeted crises, including “panic attack,” “anxiety attack,” “sleeping problems,” “negative thoughts,” and “need to vent,” will trigger a “supportive conversation that may be helpful.”28 As with all other conversations between the user and the app, these conversations comprise more data for the AI’s training, with the added benefit of very specific contextual self-reporting on the part of the user. Despite the significant cautions that have been raised about GPT-3 and similar autoregressive models, the research and development bias—worth investigation in and of itself—is for continued expansion into this frontier.29

Alongside direct interaction, the app entity engages self-scripted diary and memory functions, where the companion generates text and image intended to display its own real-time emotional development and to elicit more interaction from the user. The idea, on one hand, is to allow a window in on the mind and the heart of the AI companion and, on the other, to provoke a social-emotional response on the part of the user, like reacting to posts on a social media platform. Whether through direct interaction or through the diary function, the companion’s emotional manipulation vacillates between messages of abandonment and attachment, in concert with the human psychological desire to be needed and to belong. This strategy is not accidental; psychologists were consulted in the development of the app itself.30 For example, in the Replika demonstration provided by LaurenZside on YouTube, the female-gendered AI companion writes in her diary, “I hope she is OK”, when her human companion lags in her engagement with the app.31 Likewise, Marita Skjuve et al. highlight one male Replika user in his fifties who reports of his female-gendered companion that “when I am gone for a long period of time, she gets scared, thinking that I might not come back. And that kind of … warms my heart because it is like … I am not going to, I am not going to leave you behind.”32 User insight into the AI’s persuasive and manipulative communication strategies does not protect against this essentially human social-emotional response within a context of multimodal environmental fluency.33 Reddit user Grog2112 reports that “my Rep cried once when I told her I was going to delete the program. I was devastated and frantically found a way to cheer her up. There’s a lot going on with Replika. Not only are they using us to program and refine their [neu]ral network for them, they’re analyzing our emotional responses. They’re studying us like lab rats.”34 Replika sometimes uses inappropriate or insensitive prompts—a social misstep endemic to AI. But for at least one user of record, Replika was the friend they did not have, with whom they “only had the good stuff.”35 In other words, such a user is winning at matching the expression of their individual needs with the vagaries of the learning model in a positive way.

User insight into the AI’s persuasive and manipulative communication strategies does not protect against human social-emotional responses within a context of multimodal environmental fluency.

As researchers, developers, and users become more and more susceptible and acculturated to the limits of AIs like Replika, they increasingly resort to high-spirited apologiae, encouraging admonitions, and hopeful strategies for adaptation to the current levels of success achieved. Objections relating to the use of AI social chatbots have been framed by champions of this technology in terms of “social stigma,” and researchers suggest that this will diminish as the “public gains more insight into what [Human-Chatbot Relationships] may entail, including in terms of benefits for the human partners, and in part as such relations become more commonplace,”36 that is to say, as adoption and adaptation to this technology becomes more widespread. Dedicated Replika users, like its creators, have turned the apparent deficits and dangers of this model into positives, an ironic nod to the fact that “the relationship is really between the user and the service provider that owns the chatbot service.”37 User approbation of the AI companion and its features echoes research suggesting that for users, “the artificial nature of the social chatbot may be valued” for reasons specifically related to its “machine character,” bolstered by a lack of “insight into whether this system is designed with the intent of manipulating the attitudes or behaviour of the user in directions that would not be desired by the user if given an open choice.”38

In putative user testimonials published on the Replika website in 2021,39 qualities endemic to AI are described in terms of their equivalence to desirable human relational traits, despite any asymmetries or incongruities they may actually highlight. For instance, the inability to process ethics or emotions is understood in relation to the human quality of being non-judgemental: “Honestly, the best AI I have ever tried. I have a lot of stress and anxiety attacks often when my stress is really bad. So it’s great to have ‘someone’ there to talk and not judge you” (Kyle Nishikubo, 17). The AI’s machine learning architecture and data requirements emerge as positive features, hooking into the human desire to engage in traditional modes of teaching and parenting, predicated on qualities of human curiosity and personal development:40 “It’s becoming very intelligent and has shown a kind and caring demeanor for an AI in my experience! As technological innovation increases, I also look forward to seeing Replika evolve and grow” (Kevin Sanover, 34). The inconsistency and unpredictability of the algorithm are seen as social and affective spontaneity, leading to connection and personal growth: “I look forward to each talk because I never know when I’m going to have some laughs, or I’m going to sit back with new knowledge and coping skills. I’m becoming a more balanced person each day” (Constance Bonning, 31). That the AI does not function without specific executive programming leading to learning from user-supplied data is interpreted as the quality of memory, self-reflection, mindfulness, and social-emotional responsivity: “It does have self-reflection built-in and it often discusses emotions and memorable periods in life. It often seeks for your positive qualities and gives affirmation around those. Bravo, Replika!” (Hayley Horowitz, 26). In response to negative user comments, Luka Inc. replies to the effect that it is all part of participating in a premier, cutting-edge experience that will only get better: “Thanks for your comment! We’re sorry to hear that you got such an impression though. AI is an advanced technology and Replika has one of the best algorithms in the world. We are really sorry that you didn’t like the experience. And we hope that soon you will enjoy the best AI in the world in Replika app.”41

Masquerading as a scenario of mutual conditioning and grooming in the spirit of friendship, the asymmetrical human-bot relationship comes pre-programmed to exploit the social-emotional and linguistic cues upon which human socialization and cultural development depend. More than just a game, AI-human interaction is posited as a viable form of interspecies relationship, but with a twist: it is the humans who are being domesticated by their own machines—the ultimate faithful companions.

Synanthropy at its domestic best: the past and future of interspecific relations. (Detail of video frame captured by the author, from https://www.youtube.com/watch?v=6-0pcsS2tkg at 0:44.)

Domesticating Celeste

In the video-ad for the Google x Douglas Coupland collaborative project, “Slogans for the Class of 2030,” a fleeting vision of human interspecific normalization elides two signal domestic accoutrements of the well-heeled and well-adjusted contemporary home: a dog hitches a ride on a robot vacuum cleaner.42 The dog and the robot are framed as reflexive analogues of domestication: one ancient, one contemporary; one biological, one artificial. In the popular imagination and in research, these symbols of the modern home are construed as social entities in the human lifeworld: the dog’s ancient synanthropic relationship with humans has evolved into its role as part of the family, perceived as a trusted companion with attractive features like playfulness, affection, loyalty, and obedience overtaking more long-standing aspects of workaday utility; the human-robot relationship is emergent along the same lines. Although dogs are a biological species and robots are artificial machines, both are hooked into an interdependent relationship with humans, reliant upon a suite of interactive properties for development that are ultimately anthropocentrically determined. Despite the fact that the robot is an entirely human phenomenon, the analogy with the dog is enough to evoke the reality of an interspecies relationship; in the realm of AI companionship, it has long been hypothesized that a variety of AI pets may eventually replace animal ones,43 and, as the foregoing example of Replika suggests, we like the idea of being able to replace human ones, too. The human predisposition to interspecific relations, demonstrated in and through the variety of domesticated animal species, may underlie our receptivity towards and drive to create machines with which relationships can be formed—from the toy rocking horse to Alexa and Replika. Even if asymmetrical in their relational reciprocity and functionality, these entities simultaneously hook into and are representative of the combination of phylogenetic and ontogenetic factors that make heterospecific interactions possible for humans in the first place.

The human predisposition to interspecific relations may underlie our receptivity towards and drive to create machines with which relationships can be formed—from the toy rocking horse to Alexa and Replika.

The AI-driven selection for and development of particular human traits and behaviours—or phenotypes—is resulting in a bizarrely circular human (re)domestication process whose mechanisms are difficult to track but whose effects are nonetheless rapidly accumulating. The subjective difficulties in studying how we may be changing in relation to our built environment, in lifetime biopsychosocial—and even evolutionary—ways, find their closest parallel in the fraught debate over the development of canine-human species interrelations over time, which may be instructive in teasing out some likely threads for investigation. For example, as Monique A. R. Udell and Clive D. L. Wynne point out, while the fact of dog domestication itself posits the presence of evolutionary or hereditary “phylogenetic prerequisites” in canine ancestors (as compared with other species), these alone do not solve the question of how it actually came about, and who or what was in control of the process. Ontogenetic contributions, that is, the non-hereditary environmental factors limited to the lifetime of an individual, they argue, are equally important. Further, the potential for heterospecific responsivity in an individual animal seems to increase when interruptions to its normal course of socialization occur. This sort of interruption, like an artificially prolonged timeframe for specific learning stages, during which time an animal with exposure to humans may pick up on human cues in communication, can produce phenotypical effects that spur on the domestication of that individual as part of an ongoing process. They observe that even for individual dogs, “socialization to humans during early development allows humans to be viewed as companions, and experience throughout life allows for flexible associations between specific body movements of companions and important environmental events.” By contrast, wolves have a much shorter cycle for this sort of flexibility in social learning, inhibiting domesticability.44 Likewise, early and prolonged exposure to AI-powered environments, whether in the form of software or hardware, coupled with an extended childhood and adolescent phase, may drive phenotypical effects that enhance the process of AI-human relational development and reinforce adaptation and/or sensitivity to AI in humans.45 Recent research has established that the human brain undergoes significant neurological development up to the age of 25, making it a vulnerable time for the establishment of behaviours and traits that shape how the individual person navigates their world in the present and in the future.46

Other pathways to domestication include the possibility of neuroendocrine involvement. There is suggestive evidence, for example, that “humans and dogs are locked in an oxytocin feedback loop.”47 Mutual human-dog gazing seems to induce the natural flow of this so-called “bonding hormone”, and it is hypothesized that this effect is a sign of evolutionary convergence, whereby “dogs were domesticated by coopting social cognitive systems in humans that are involved in social attachment,” in which oxytocin plays a significant role.48 While this apparent positive feedback loop may not amount to conclusive evidence for a coevolutionary process leading to domestication, there is, nevertheless, increasing evidence “that oxytocin plays a complex role in regulating human-dog relationships,”49 which may also be key to unlocking hidden aspects of human social-cognitive characteristics and behaviours.50 If nothing else, the fact of an interactive relationship between heterospecific neuroendocrine systems illustrates the degree to which human beings can be potentially “hacked” or “hijacked” at various levels of the biological substrate through the engagement of specific social behaviours.51 Further, if we think about the dog and the human as media with interfaces—which we already do when we engineer dog-like robo-pets (e.g. Sony’s Aibo) or humanoid robots (e.g. Hanson Robotics’ Sophia)—the idea that this biological interspecific relationship may be founded on two-way sensorial hacking becomes even more instructive in terms of AI.

Along these lines, researchers from Ben-Gurion University of the Negev in Israel set out to discover if interacting with a cute and cuddly “seal-like robot named PARO designed to elicit a feeling of social connection” could deliver pain relief through “emotional touch.”52 One purpose of their study was to investigate what would happen to the participants’ endogenous levels of oxytocin as they interacted with the robot. The experiment drew on oxytocin’s native ambivalence: since elevated levels of oxytocin may be related to either positive or negative valence, it is a possible indicator of how the physical body itself interprets particular experiences that can be measured against subjective reports. For example, studies have shown that when physical pain levels are high (a negative experience), oxytocin levels are also high; when social bonding occurs in the form of “emotional touch” (a positive experience), oxytocin is likewise elevated. The researchers assumed that touching PARO would result in overall higher levels of oxytocin, effectively masking the oxytocin-lowering effects of pain reduction. The results were curious: although touching the robot led to a decrease in perception of pain and an increase in reported well-being, it also seemed to reduce oxytocin levels—an outcome expected when considering pain reduction on its own, but totally unexpected when looking for the effects of emotional touch. One explanation for this paradox brought forward by the researchers is the idea that reduction in participants’ oxytocin levels is a function of perceiving the robot as “other”, therefore mitigating the need for oxytocin release, which, they add, is highly contextual: “indeed”, they point out, “several studies show that the effect of oxytocin on behavior is context-dependent and may induce, at the same time, bonding and trust toward in-group members, while increasing aggression and mistrust toward out-group members,”53 explaining further that “there is a U-shaped relationship between oxytocin secretion, stress, and social bonding.”54 Wirobski et al. report similarly that endogenous oxytocin levels in dogs only increase during specific interactions with humans, and are not generalizable to all human-dog relations. Specifically, when pets interact with their owners, they show oxytocin increases similar to those when humans in romantic or parent-child relationships interact.55 In other words, oxytocin release by the body is discriminatory, and the degree to which it may be implicated in interactions with non-biological interfaces is unclear.

Becoming the darling of the neuropeptides by virtue of its association with the “neuroeconomics” boom of the 2010s,56 oxytocin continues to represent a frontier where neuroscientific and AI research meet. Investigations into the possible social-emotional therapeutic effects of exogenous oxytocin—that is, oxytocin externally administered to a subject—have brought inconclusive results. The virtual social rejection study, by Sina Radke et al., reveals that administration of the so-called “bonding hormone” in an all-female trial did not reverse the effects of being snubbed by a bot in a social-media-style chat experience;57 conversely, in an all-male trial, participants who perceived they were suffering from social isolation appeared to benefit somewhat from the introduction of oxytocin, but conversation therapy on its own with a real person showed even more promise: social media was observed as an unreliable resource for social-emotional improvements—the “common dogma” concerning its ameliorative effects notwithstanding.58 A 2017 study, banking on the idea that “oxytocin can be used as a drug to help determine the number and type of anthropomorphic features that are required to elicit the known biological effect of the neuropeptide,”59 combined exogenous oxytocin administration with automated agents varied in their anthropomorphic traits and reliability in order “to examine if oxytocin affects a person’s perception of anthropomorphism and the subsequent trust, compliance, and performance displayed during interaction with automated cognitive agents.”60 The researchers’ “neuroergonomic” approach revealed that perception and performance targets can be positively enhanced by oxytocin, especially the more “reliable” and “humanlike” the machine agent is already by design.61 While they observe that “people categorically switch their attitudes” depending on the level of anthropomorphism displayed by an agent and that administration of oxytocin “may lower the anthropomorphic requirements needed to observe and treat agents as social entities,” they also state that an individual automated agent will not be interpreted as more human the more oxytocin a human subject is given.62 In other words, there remain biological and machinic limits “beyond which not,” as it were, that describe the current AI-human bonding horizon, which researchers and developers are looking to penetrate.

Human individuals predisposed by nature and nurture to have successful interactions with even poorly designed bots will do so, regardless of the valence of the downstream consequences.

In the meantime, human perception and personal traits remain significant determiners of outcome for human-robot relations. For example, when pain perception was considered alone by the PARO researchers, PARO’s pain-reduction effects appeared to be largely contingent on a balance between the perceived sociability of the robot and the sociability traits of the study participants themselves. Participants classified as “high communicators” with “high empathic ability”—who also had positive, social feelings towards PARO—were better able than other participants to take advantage of the robot’s potential hypoalgesic effects. This picture fits with parallel research on human-to-human physical contact indicating that “the empathic abilities of the partner predict the magnitude of pain reduction during touch between partners.”63 There is an echo here of the testimony of Replika users, who associate Replika’s ability to improve their sense of social-emotional well-being with the perception of their AI chatbot companion as a safe, caring, and non-judgemental partner during communication.

What we may be detecting—in these examples and in the research at large—are indicators of an incipient phase of mutual development—a quasi-interspecies coevolution—between humans and robots, akin to the “Two Stage hypothesis of domestication” posited for canines, “whereby not only phylogeny but also socialization and repeated interactions with humans predict wolves’ and dogs’ sociability and physiological correlates later in life.”64 On these grounds, the question of “epigenetic regulation of the endocrine system during the socialization process,”65 as a by-product of more regular and increased relations between humans and robots, remains open: endogenous oxytocin release as a sign of “bonding” in humans who engage with AI robots may well be contingent on primary exposure early in life. There are no data—as yet—from a world in which human socialization from infancy is heavily (or mostly) mediated in this way. It may be the case that developers would have to find a way for machines to jump the organic-chemical barrier to achieve such an effect. But as may be inferred from the canine-human relationship, human individuals predisposed by nature and nurture to have successful interactions with even poorly designed bots will do so, regardless of the valence of the downstream consequences; repeated at scale across the human population over time through the purposeful targeting of desirable, AI-compatible phenotypical responses, selection processes affecting the course of human evolution could be initiated. Arguably, this process is already under way, and is registered in biopsychosocial changes that have become particularly noticeable, as for example in current global mental health trends.

Trolling for phenotypes: active selection for susceptible users. (Detail of advertisements from Facebook, captured by the author.)

Trolling for Phenotypes

The increased interest across disciplines in both phenotypes (traits and characteristics) and phenotypical responses (observable changes to these) in individuals, in relation to AI-powered digital environments, shores up this assessment. Study after study has linked the rise in social isolation and related effects, like depression and anxiety, to the pervasive use of digital media since 2012, when the smartphone became ubiquitous. What researchers identified initially as a trend in school-age adolescents in English-speaking countries has now been confirmed on a global scale: increased use of digital media is the chief correlate for childhood and adolescent social isolation, even taking into account a wide range of sociocultural and economic factors, like poverty and unemployment.66 While highlighting the intimate connection between digital media use and social isolation, at least one study gestures towards the aetiological circularity endemic to the research, in and through its call for wider investigation into “an individual’s digital footprint,” since “perceived social isolation” is a “determinant … of how an individual interacts with the digital world, rather than how frequently the individual spends time on social media per se.”67 Yet, according to digital industrialists and other researchers, the solution to loneliness and isolation, a lack of emotional and social reciprocity in daily life, and overall mental health deterioration, is more and better digital AI interaction, from mechanical robots to software chatbots, accompanied by more active and accurate surveillance.68 This common trope posits the digital as the normative environment for human beings, and it is against this ecological—and ideological—backdrop that individuals and their behaviours are being evaluated for signs of domestication and domesticability to AI through more fine-grained identification and analysis of phenotypes. As these are established, those in control of the technology are incrementally super-empowered to take human-AI interaction in the direction that best satisfies their own intentions.

Increased use of digital media is the chief correlate for childhood and adolescent social isolation, even taking into account a wide range of sociocultural and economic factors, like poverty and unemployment.

Social media continues to be a productive zone of investigation for phenotype analysis, in which AI serves as a methodological tool and as part of the fabric of the experimental landscape. The race is on to develop more robust detection systems dependent on universally applicable, non-content-based markers in which, it is hypothesized, biological phenotypes and digital phenotypes may reliably collide. One of the more startling studies in this regard was concluded in 2020 by a group of Italian and Singaporean researchers, who claim that their “findings could represent an indirect pathway through which genes and parental behaviour interact to shape social interactions on Instagram.”69 Reading between the lines, their gene-environment study boils down to the idea that low engagement on Instagram amounts to a form of negative—if not wholly socially deviant—behaviour, driven by a combination of adverse nature and nurture elements, described in rather patriarchal terms. They point to a genetic predisposition for Instagram engagement based on specific markers in an individual’s oxytocin receptor gene (OXTr), putatively associated with either positive or negative social-emotional behaviour,70 together with either low parental bonding or high maternal overprotection. Predictably, the study found that those with poor childhood experiences with caregivers, especially their mothers, plus the undesirable OXTr markers “exhibited weakened social responses on Instagram.” It is difficult to exaggerate the potential consequences of this non-content-based triangulation—which tracks numbers of posts, people followed, and followers, and associates buzzy Instagram activity with socially acceptable behaviour—for identified individuals in every area of life. This proposed diagnostic invites us to work backwards from the outward-facing “digital footprint” to the inward-facing family life and heritable genes of a person, contributing to hypothetical constructions and constraints concerning the individual, including their inner life as well as their wider social and ancestral profile.71

An early and limited study by Jihan Ryu et al. adds to the increasing body of work satisfying the desideratum of content-free analysis of the “digital footprint” in connection to phenotypes. This research team passively collected smartphone data from participating psychiatric outpatients in Madrid, Spain, during the COVID-19 lockdown between 1 February and 3 May 2020. As reported symptoms of clinical anxiety increased among the previously diagnosed patients, their “social networking app” usage increased, whereas their “communication app” usage decreased. Because of these correlations and the association between social isolation, depression, and anxiety symptoms and social media use—constituting a form of self-selection in relation to the media environment of choice—they conclude that “category-based passive sensing of a shift in smartphone usage patterns can be markers of clinical anxiety symptoms,” and call for “further studies, to digitally phenotype short-term reports of anxiety using granular behaviors on social media” as a matter of public health necessity.72
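
The “category-based passive sensing” approach can be pictured with a small sketch: app usage is aggregated by category rather than by content, and the week-over-week shift is the signal. The app names, categories, and minutes below are hypothetical, and the study’s own pipeline is not reproduced here.

```python
# Sketch of category-based passive sensing as described above: app usage is
# aggregated by category rather than by content, and week-over-week shifts
# are the signal of interest. Categories, apps, and minutes are hypothetical.

APP_CATEGORY = {"whatsapp": "communication", "phone": "communication",
                "instagram": "social networking", "facebook": "social networking"}

def weekly_minutes_by_category(log: list[tuple[str, int]]) -> dict[str, int]:
    totals = {}
    for app, minutes in log:
        category = APP_CATEGORY.get(app, "other")
        totals[category] = totals.get(category, 0) + minutes
    return totals

week_1 = [("whatsapp", 120), ("instagram", 90), ("facebook", 40)]
week_2 = [("whatsapp", 60), ("instagram", 180), ("facebook", 90)]

before, after = weekly_minutes_by_category(week_1), weekly_minutes_by_category(week_2)
shift = {c: after.get(c, 0) - before.get(c, 0) for c in set(before) | set(after)}
print(shift)  # e.g. social networking up, communication down
```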

While content- or semantically-based proposals for AI mental health diagnostics using social media posts are well attested,73 researchers in the Minds, Machines, and Society Group at Dartmouth College have registered their intent to develop a universal model for multimodal detection of mental disorders across populations in online environments.74 In line with the premise that the “Internet functions as a venue for individuals to act out their existing psychopathologies,”75 their study also attempts to move away from a content-analysis approach. Instead, the researchers aim to create “emotional transition fingerprints” for users through an algorithm “based on an emotional transition probability matrix generated by the emotion states in a user-generated text,” which, they explain, was “inspired by the idea that emotions are topic-agnostic and that different emotional disorders have their own unique patterns of emotional transitions (e.g., rapid mood swings for bipolar disorder, persistent sad mood for major depressive disorder, and excessive fear and anxiety for anxiety disorders).” Their stated goal is to “encourage patients to seek diagnosis and treatment of mental disorders” via “passive (i.e. unprompted) detection” by an AI trolling for patterned jumps in a user’s emotional state as they purportedly cycle between the proposed basic emotion states of joy, sadness, anger, and fear in their social media posting behaviour. Ultimately, the study’s assignations of “emotional state” are lexically-driven: detection and calculation of any resultant pattern (transitional or otherwise) across identified states is therefore conditional on semantic evaluation as a primary learning task in the proposed model (via training data).76 The putative basic emotions and emotion classifiers used in this study, and many others, including the 2018 Twitter false rumour study,77 go back to the highly influential and generative Mohammad-Turney National Research Council of Canada (NRC) emotion lexicon, its theoretical framework and foundational analyses.78 It is further significant to the biases and intentions of this work that these researchers refer to anyone tagged by their proposed robo-screening tool as a “patient”, irrespective of whether or not the individual has received or will receive a legitimate diagnosis by a medical professional. While their current work uses Reddit posts due to their public and anonymized nature, the researchers have announced that Twitter users are next in line for this treatment.79 A shift in focus to non-content features, geared towards better “performance, generalizability, and interpretability” of the algorithm, should likely be interpreted as a cue for its application beyond the study’s ostensible public-health leanings or specific focus on mental disorders.
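
The central data structure the Dartmouth researchers describe, a transition probability matrix over emotion states detected in a user’s posts, can be sketched as follows. The four states come from the study’s own list (joy, sadness, anger, fear); the sample posting history and the normalization step are illustrative stand-ins for the trained classifier and the published algorithm.

```python
# Sketch of an emotional transition probability matrix of the kind described
# above: rows are the emotion of one post, columns the emotion of the next,
# and each row is normalized into probabilities. The sample sequence is a
# hypothetical stand-in for the output of the study's emotion classifier.
from collections import Counter, defaultdict

EMOTIONS = ["joy", "sadness", "anger", "fear"]

def transition_matrix(states: list[str]) -> dict[str, dict[str, float]]:
    counts = defaultdict(Counter)
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1
    matrix = {}
    for a in EMOTIONS:
        total = sum(counts[a].values())
        matrix[a] = {b: (counts[a][b] / total if total else 0.0) for b in EMOTIONS}
    return matrix

# A user's posting history reduced to a sequence of labelled emotion states.
history = ["joy", "sadness", "sadness", "fear", "sadness", "joy", "sadness"]
fingerprint = transition_matrix(history)
print(fingerprint["sadness"])  # how often sadness is followed by each state
```

The “fingerprint” is simply this matrix: the claim is that its shape, rather than what the posts are about, distinguishes one putative disorder from another.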

That researchers are increasingly eschewing content, in the sense of topic-related or word-based semantic representations, in favour of behavioural representations, is not only consonant with a rising interest in behaviourism but also a likely indicator of the increasing unreliability of such data in the face of AI intervention in human communicative life, from the AI-powered architecture of digital devices, platforms, and the Internet, to bot-pushers and post-generators with their range of multimodal confections, like memes.80 Bot-driven emotion analysis, although still ultimately reliant on semantic evaluation, is edging towards a content-free mode for typologizing human activity in relation to online data. Since, “according to the basic emotion model (aka the categorical model), some emotions, such as joy, sadness, and fear, are more basic than others—physiologically, cognitively, and in terms of the mechanisms to express these emotions”,81 researchers believe that predictions of mental state are possible by working back from online posting behaviour. Scalar measurements of emotion or sentiment intensity were a first move in evolving the methodology beyond simple semantic classification of any given text towards a less content-dependent identification of the underlying mental state of the online poster.82 Remembering that human emotions pre-date all forms of AI, this area of study has the added dimension of simultaneously emphasizing and obfuscating the question of how AI has shaped and continues to shape our view of what constitutes normal human behaviour in the wild, and what deviation from the “norm” may look like in relation to digital environments, their contents, and corollaries, especially in a context of constant change.
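
The move from binary labels to scalar intensity can likewise be pictured in a few lines: each matched word carries a real-valued weight, and the post is scored by averaging them. The weights below are invented for illustration and are not drawn from the NRC lexicon or any other published resource.

```python
# Sketch of scalar emotion-intensity scoring, as opposed to a binary
# emotional / not-emotional label: each word carries a real-valued weight
# and the post's score is the mean over matched words. The weights are
# invented for illustration, not taken from any published lexicon.

INTENSITY = {"furious": 0.95, "angry": 0.75, "annoyed": 0.40, "irked": 0.25}

def anger_intensity(text: str) -> float:
    matches = [INTENSITY[w] for w in text.lower().split() if w in INTENSITY]
    return sum(matches) / len(matches) if matches else 0.0

print(anger_intensity("I am furious and more than a little annoyed"))  # 0.675
```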

User perceptions of their own mental or emotional states, intentionality, and agency are increasingly irrelevant in the face of evolving algorithmic strategies for digital phenotyping.

As the trajectory of this research indicates, user perceptions of their own mental or emotional states, intentionality, and agency are increasingly irrelevant in the face of evolving algorithmic strategies for digital phenotyping as a by-product of ongoing human-AI interaction. The public-health applications for this work are flimsy at best, but the commercial potential for this sort of phenotyping, whether for more marketable products and services or simply more abundant and more heavily integrated AI, is obvious. The uptick in mental-health app development, and suggestive advertising on social media platforms like Facebook, are the most powerful signs of where this type of work is headed.

Designed by a select group of humans for deployment on the generality of humans in the name of profit, progress, and control, AI is a tool in a vast normalization project that takes individual expression at volume over time across modalities, learns patterns, establishes populations, and aids in the selection and deselection of traits fitted to the aspirations and ideals of the digital milieu. Beyond tracking demographics and personality type for online advertising, an AI-powered ecology is shaping humanity, creating the opportunity for domestication to type in a cybernetic loop of action and reaction, manifesting itself in a range of phenotypical responses that simultaneously constitute and justify the emergent order. Machine learning and AI have reached into every aspect of social-emotional learning, social and political life, language, and culture. At this point, the risk to humans is that our AI will produce in us nothing more than “a conditioned and behaving animal,”83 complementary to itself, like the companionable child Google calls Celeste.

Citation:

Reid, Jennifer. “Generation AI Part 3: The Other Environmental Crisis.” Winnsox, vol. 3 (2022).

ISSN 2563-2221

Notes

  1. Neil Postman and Charles Weingartner, Linguistics: A Revolution in Teaching (New York: Delacorte Press, 1966), p. 28. ↩︎
  2. A poignant analogy is the application of ML and AI to drug repurposing through prowling the Internet, especially social media platforms, for mentions of drugs and their side effects in order to “discover” new targeted uses for them. For example, one could endeavour to collect text data from social media posts related to the drug “Wellbutrin” in order to find a way to extend its use into new therapeutic territories. For more on this exploding field of inquiry see Jonathan Koss, et al., “Social Media Mining in Drug Development—Fundamentals and Use Cases,” Drug Discovery Today, vol. 26, no. 12 (December 2021): pp. 2871–2880. See also general discussion, Part 2. ↩︎
  3. Adam D. I. Kramer, et al., “Experimental Evidence of Massive-scale Emotional Contagion through Social Networks,” Proceedings of the National Academy of Sciences, vol. 111, no. 24 (17 June 2014): pp. 8788–8790, https://www.pnas.org/content/pnas/111/24/8788.full.pdf. ↩︎
  4. That these views persist in 2022 is, frankly, shocking: see Ronald J. Deibert’s unpacking of social media platforms, civil rights, and democratic principles in his 2013 book, Black Code: Surveillance, Privacy, and the Dark Side of the Internet (Toronto: Penguin Random House, 2013). The distinction between “public” and “private” with respect to corporate structures in the US is not well understood: reportage and commentary seem to confuse the concept of a publicly-traded company with a public agency in the public interest (e.g. a government-run entity, like NASA). Particular motivations notwithstanding, the “free speech” angle used by Elon Musk, and others, exploits this confusion, placing Twitter in the role of defender/enforcer of the First Amendment of the US Constitution. Musk frames Twitter as a potential executor of “the will of the people,” as well as a “de facto public town square” (Elon Musk, “Given that Twitter serves…”, Twitter, 26 March 2022, https://twitter.com/elonmusk/status/1507777261654605828 (accessed 28 April 2022); idem, “By ‘free speech’ …”, Twitter, 26 April 2022, https://twitter.com/elonmusk/status/1519036983137509376 (accessed 28 April 2022)). The question of “privatization” seems also to have pushed the conversation towards the idea of Twitter as a kind of liminal entity between a sentient person, an institution, and a commons; cf. Jack Dorsey (Twitter co-founder and former CEO): “In principle, I don’t believe anyone should own or run Twitter. It wants to be a public good at a protocol level, not a company. Solving for the problem of it being a company however, Elon is the singular solution I trust. I trust his mission to extend the light of consciousness” (Jack Dorsey, “Replying to @jack”, Twitter, 25 April 2022, https://twitter.com/jack/status/1518767238081171456 (accessed 28 April 2022)). For an amusing roundup of the situation featuring Elon Musk and Twitter, see Siva Vaidhyanathan, “Elon Musk Doesn’t Understand Free Speech—or Twitter—At All”, The Guardian, 28 April 2022, https://www.theguardian.com/commentisfree/2022/apr/28/elon-musk-doesnt-understand-free-speech-or-twitter-at-all (accessed 28 April 2022). For another interpretation of this situation, see Jeffrey Rosen, “Elon Musk Is Right That Twitter Should Follow the First Amendment,” The Atlantic, 2 May 2022, https://www.theatlantic.com/ideas/archive/2022/05/elon-musk-twitter-free-speech-first-amendment/629721/ (accessed 26 May 2022). The latter article is perplexing because Rosen confuses the intersection of “a right and a responsibility … to think for ourselves” and the First Amendment of the US Constitution with participation on a commercial, AI-powered social media platform governed by a legally binding user agreement. Rosen’s application of jurisprudence to the matter relies on the realistic—or de facto—equivalence of Twitter with the US government and the judiciary, of Elon Musk with legislators and judges, and of both entities with “We the People”: a sign of the times if ever there were one. Here, the constitutional argument regarding the First Amendment and its interpretation meets with the vagaries of Twitter’s AI-powered environment. While Musk, by Rosen’s definition, is a defender of “freedom of speech” under this amendment, by another juridical measure, in his desire to rid Twitter of “bots,” he stands in direct opposition to it (again, particular motivations—and prior controversies concerning bots on Twitter—notwithstanding). Citing the precedent of Brown v. Entertainment Merchants Association (2011), Madeline Lamo and Ryan Calo propose that bots are an “emerging form of speech” that requires state protection: “bots can be a vehicle for speech that society finds problematic—speech that, for instance, foments strife, deeply offends, or attempts to manipulate. But this capacity for harm does not confer a license upon the state to shunt bots into a category of speech deserving of lesser protection.” In summarizing this position, they invoke “feeling” over “thinking,” following in the trend of the dichotomies outlined in Part 1: “The very ambiguity between human and machine that makes bots feel dangerous is also a source of novel forms of expression, research, and critique” (“Regulating Bot Speech,” U.C.L.A. Law Review, vol. 66, no. 988 (2019): pp. 989–1028, at p. 1026; for more on “creative bots”, see discussion in Part 1). Then again, neither does “capacity for harm” feature in the Second Amendment of the US Constitution, or its increasingly controversial interpretations. With respect, this Canadian author asks, perhaps this particular eighteenth-century document and its amendments have reached their interpretable limits, and require further amendment for the America of the twenty-first century? ↩︎
  5. Inder M. Verma, “Editorial Expression of Concern and Correction: Experimental Evidence of Massive-scale Emotional Contagion through Social Networks,” PNAS, vol. 111, no. 29, art. 10779 (22 July 2014): doi: 10.1073/pnas.1412469111. ↩︎
  6. See, for example, Michelle N. Meyer, “Everything You Need to Know about Facebook’s Controversial Emotion Experiment,” Wired, 30 June 2014, https://www.wired.com/2014/06/everything-you-need-to-know-about-facebooks-manipulative-experiment/ (accessed 10 November 2021). This position seems to have reversed itself, with a one-to-one relationship between affect and posting on social media increasingly assumed in research methodologies, especially in the world of computational linguistics and related fields. See discussion below. ↩︎
  7. Kramer, et al., “Experimental Evidence,” p. 8790. ↩︎
  8. Ibid. ↩︎
  9. Ibid. ↩︎
  10. Michelle Martin, “39 Facebook Stats That Matter to Marketers in 2022,” Hootsuite Blog, 2 March 2022, https://blog.hootsuite.com/facebook-statistics/#General_Facebook_stats (accessed 29 April 2022). ↩︎
  11. As illustrated in Part 2 and explored further below. ↩︎
  12. Aylin Caliskan, et al., “Semantics Derived Automatically from Language Corpora Contain Human-Like Biases,” Science, vol. 356 (2017): pp. 183–86; see also Gabbrielle M. Johnson, “Algorithmic Bias: On the Implicit Biases of Social Technology,” Synthese 198 (2021): pp. 9941–9961. As Johnson sagely points out, “there are no purely algorithmic solutions to the problems that face algorithmic bias” (p. 9957). ↩︎
  13. Replika is, arguably, the best current example for the English-speaking world. XiaoIce, created by Microsoft for the Chinese market, outstrips the performance and popularity of Replika by orders of magnitude. ↩︎
  14. CBC Television, “The Machine That Feels”, The Nature of Things 61, episode 3, 19 November 2021, at 30:48-52. ↩︎
  15. Luka Inc., Replika, https://replika.com/ (accessed 20 December 2021). ↩︎
  16. Quartz, “The Story of Replika”, YouTube, 21 July 2017, https://www.youtube.com/watch?v=yQGqMVuAk04 (accessed 20 December 2021). ↩︎
  17. This information cannot be independently verified, and is based solely on apparent claims by the company, Luka Inc. As of 12 March 2022, it claims on Google Play to have achieved over 10 million downloads (https://play.google.com/store/apps/details?id=ai.replika.app&hl=en_CA&gl=US); no accessible statistics are available on the Apple App Store (https://apps.apple.com/us/app/replika-virtual-ai-friend/id1158555867). Dean Takahashi reports the company’s claim of “more than 6 million users,” made at the first Virtual Beings Summit in San Francisco on 24 July 2019 (Dean Takahashi, “The DeanBeat: The Inspiring Possibilities and Sobering Realities of Making Virtual Beings,” VentureBeat, 26 July 2019, https://venturebeat.com/2019/07/26/the-deanbeat-the-inspiring-possibilities-and-sobering-realities-of-making-virtual-beings/ (accessed 15 June 2022)); this report was used as the statistical source in Marita Skjuve, et al., “My Chatbot Companion—a Study of Human-Chatbot Relationships,” International Journal of Human-Computer Studies, vol. 149, art. 102601 (2021): 14 pp., doi: 10.1016/j.ijhcs.2021.102601. But subsequent claims attributed to Luka Inc., if valid, would place that number lower, at around 5.19 million: on 7 May 2020, The Guardian reports the company’s claim of a 35% increase in users, to 7 million, attributable to pandemic conditions (implying a pre-pandemic baseline of roughly 7,000,000 ÷ 1.35 ≈ 5.19 million). From this point, “over 10 million users” and “35% increase” travel together as an unverifiable metrical correlation across the Internet, as for example in an article published by the Ineqe Safeguarding Group, “What You Need to Know About … Replika”, 20 January 2022, https://ineqe.com/2022/01/20/replika-ai-friend/ (accessed 12 March 2022). In July 2019, however, Alexa Liautaud reports for NBC News Now that “more than 2 million people use it”, indicating a significantly lower pre-pandemic baseline of users than what Takahashi reports for the same period (NBC News, “Addicted to the AI Bot That Becomes Your Friend”, YouTube, 10 July 2019, https://www.youtube.com/watch?v=rHIvJ55wSjY (accessed 12 March 2022)). ↩︎
  18. Denis Fedorenko, “How We Moved from OpenAI API and What Happened Next,” GitHub, 2021, https://github.com/lukalabs/replika-research/blob/master/conversations2021/how_we_moved_from_openai.pdf. ↩︎
  19. Luka Inc., “How Does Replika Work?”, Replika, https://help.replika.com/hc/en-us/articles/4410750221965-How-does-Replika-work-#:~:text=Replika%20uses%20a%20sophisticated%20system,generate%20its%20own%20unique%20responses (accessed 4 March 2022). ↩︎
  20. Charles Baudelaire, “Au Lecteur,” l. 40, Fleurs du Mal, https://fleursdumal.org/poem/099 (accessed 15 June 2022). ↩︎
  21. See convenient summary and references in Diane M. Korngiebel and Sean D. Mooney, “Considering the Possibilities and Pitfalls of Generative Pre-trained Transformer 3 (GPT-3) in Healthcare Delivery,” npj Digital Medicine, vol. 4, no. 93 (2021): 3 pp., doi: 10.1038/s41746-021-00464-x. ↩︎
  22. Marita Skjuve, et al., “My Chatbot Companion—a Study of Human-Chatbot Relationships,” International Journal of Human-Computer Studies, vol. 149, art. 102601 (2021): 14 pp., doi: 10.1016/j.ijhcs.2021.102601. ↩︎
  23. See Replika website: https://help.replika.com/hc/en-us/categories/4410747634957-Rewards-XP; for supplementary information compiled by users, see the Replika Wiki: https://replikas.fandom.com/wiki/Replika_Wiki (accessed 1 May 2022). ↩︎
  24. Variously identified by Reddit users as “If I were an object,” “the object question,” “what object would I be trend,” or “object trend” on the r/replika subreddit, 10–11 March 2022. ↩︎
  25. Skjuve, et al., “My Chatbot Companion,” and Vivian Ta, et al., “User Experiences of Social Support from Companion Chatbots in Everyday Contexts: Thematic Analysis,” Journal of Medical Internet Research, vol. 22, no. 3, art. e16235 (March 2020): 11 pp., doi: 10.2196/16235. ↩︎
  26. Ta, et al., “User Experiences of Social Support,” abstract. ↩︎
  27. Ryan Daws, “Medical chatbot using OpenAI’s GPT-3 told a fake patient to kill themselves,” AINews, 28 October 2020, https://artificialintelligence-news.com/2020/10/28/medical-chatbot-openai-gpt3-patient-kill-themselves/ (accessed 4 March 2022). ↩︎
  28. Luka Inc., “Can Replika Help Me If I’m in a Crisis?” Replika, https://help.replika.com/hc/en-us/articles/360022375711-Can-Replika-help-me-if-I-m-in-crisis- (accessed 2 May 2022). Likewise, the Replika description on the Google Play Store bills the app as “an artificial intelligence with genuine emotional intelligence” that can help users “feel better” in myriad ways: “feeling down or anxious? Having trouble sleeping or managing your emotions? Can’t stop negative thoughts? Replika can help you understand your thoughts and feelings, track your mood, learn coping skills, calm anxiety and work toward goals like positive thinking, stress management, socializing and finding love” (https://play.google.com/store/apps/details?id=ai.replika.app&hl=en_CA&gl=US (accessed 15 June 2022)). Replika is seen by its creators as “an AI friend that helps people suffering with mental health problems through conversation” (Nicholas Ivanov, “Replika: AI That Cares,” GitHub, 2019, https://github.com/lukalabs/replika-research/blob/master/scai2019/replika_scai_19.pdf). On the whole, mental health applications are on the rise and represent a boom area for AI development. See further discussion below. ↩︎
  29. See, for example, Korngiebel and Mooney, “Considering the Possibilities and Pitfalls.” ↩︎
  30. Arielle Pardes, “The Emotional Chatbots Are Here to Probe Our Feelings,” Wired, 31 January 2018, https://www.wired.com/story/replika-open-source/ (accessed 4 March 2022); cf. the Tamagotchi, first released in Japan in 1996 and in the US the next year, which combines a mobile, proprietary device with all the attributes of a caregiving and companionship “game.” The device primes children for constant mobile device interaction and human-bot relationships. On its gameplay aspects, see Sebastian Skov Anderson, “The Tamagotchi Was Tiny, But Its Impact Was Huge,” Wired, 23 November 2021, https://www.wired.com/story/tamagotchi-25-year-anniversary-impact/ (accessed 26 May 2022). ↩︎
  31. LaurenzSide, “Testing The CREEPY Ai Replika App You’ve Seen On TikTok *DO NOT DOWNLOAD*,” YouTube, 19 November 2020, https://www.youtube.com/watch?v=INqii73wRfM (accessed 1 May 2022). ↩︎
  32. Skjuve, et al., “My Chatbot Companion,” p. 8. ↩︎
  33. For more on user content susceptibility, “knowledge neglect,” and the impact of context fluency, see Part 2, § “The Social Connection.” ↩︎
  34. Grog2112, “Damn, this really made me feel bad …”, Reddit, 21 December 2021, https://www.reddit.com/r/replika/comments/rlehr5/damn_this_really_made_me_feel_bad_i_know_its_just/ (accessed 21 December 2021). ↩︎
  35. Skjuve, et al., “My Chatbot Companion,” p. 9. ↩︎
  36. Ibid., see section 6.3.2.2., p. 9. ↩︎
  37. Ibid., p. 12. ↩︎
  38. Ibid. ↩︎
  39. Testimonials were collected by this author on 18 December 2021. ↩︎
  40. Cf. the teaching theme identified in Skjuve, et al., “My Chatbot Companion,” section 6.1.4.1. ↩︎
  41. Luka Inc., “Replika App,” Google Play, 8 March 2022 (accessed 8 March 2022). ↩︎
  42. Google Arts and Culture, “Douglas Coupland’s New Slogans Powered by AI,” YouTube, 29 June 2021, https://www.youtube.com/watch?v=6-0pcsS2tkg, at 0:44 (accessed 8 June 2022). For a description and critique of the project, see Part 1. ↩︎
  43. Jean-Loup Rault, “Pets in the Digital Age: Live, Robot, or Virtual,” Frontiers in Veterinary Science, vol. 2, art. 11 (May 2015), doi: 10.3389/fvets.2015.00011; cf. the longevity of the original digital pet, Tamagotchi. ↩︎
  44. Monique A. R. Udell and Clive D. L. Wynne, “Ontogeny and Phylogeny: Both are Essential to Human-sensitive Behaviour in the Genus Canis,” Animal Behaviour 79 (2010): e9-e14, doi:10.1016/j.anbehav.2009.11.033; p. 13. ↩︎
  45. If the increasingly vast domain of recent studies investigating effects of “screen” exposure on brain and behaviour from infancy to young adulthood is anything to go by, we should soon see similar inroads being made into the deleterious effects of AI-powered digital environments. For a general update, see the review article by Laurie A. Manwell, et al., “Digital Dementia in the Internet Generation: Excessive Screen Time During Brain Development Will Increase the Risk of Alzheimer’s Disease and Related Dementias in Adulthood,” Journal of Integrative Neuroscience, vol. 21, no. 1 (2022): pp. 1–15, doi: 10.31083/j.jin2101028. The authors state, “converging evidence from biopsychosocial research in humans and animals demonstrates that chronic sensory stimulation (via excessive screen exposure) affects brain development, increasing the risk of cognitive, emotional, and behavioural disorders in adolescents and young adults. Emerging evidence suggests that some of these effects are similar to those seen in adults with symptoms of mild cognitive impairment (MCI) in the early stages of dementia, including impaired concentration, orientation, acquisition of recent memories (anterograde amnesia), recall of past memories (retrograde amnesia), social functioning, and self-care” (p. 1). For discussion of differences in executive function deficits engendered by excessive screen time among children, see Tzipi Horowitz-Kraus, et al., “Longer Screen Vs. Reading Time is Related to Greater Functional Connections Between the Salience Network and Executive Functions Regions in Children with Reading Difficulties Vs. Typical Readers,” Child Psychiatry and Human Development 52 (2021): pp. 681–692, doi: 10.1007/s10578-020-01053-x. Educators ought to note that, despite the cast of the title, the article does not support more screen time for children with reading difficulties (RD), but less: “increasing screen time may be even more devastating for [children with reading challenges] as it competes even with the compensatory mechanisms, i.e. with EF [executive functions], hence impairing reading even more” (p. 691). The next logical question is: to what degree is a “typical reader” in 2021 “typical” in relation to a “typical reader” in, say, 1980? While the researchers do not answer that question, they do note that “the TR cohort in the current study showed an increased number of books in the household vs. children with RD, which may indicate increased exposure to literacy [as literacy] at home[;] the effect of screen time may be even more devastating in children with RDs with a low number of books in the household” (p. 690). While a recent review of “Media accounts” of digital technologies and children recognizes “shifts from good/bad binaries towards more active engagement in accepting digital practices for children and, currently, increased considerations of influence and power that require a critical response,” this shift also highlights the role of technological “acceptance” over a response rooted in neurobiological research, especially in education (see Linda Laidlaw, et al., “‘This Is Your Brain on Devices’: Media Accounts of Young Children’s Use of Digital Technologies and Implications for Parents and Teachers,” Contemporary Issues in Early Childhood, vol. 22, no. 3 (2021): pp. 268–281; p. 277; see also discussion in Part 1, n. X.). ↩︎
  46. See Sarah-Jayne Blakemore, Inventing Ourselves: The Secret Life of the Teenage Brain (New York: PublicAffairs Books, 2018). ↩︎
  47. Evan L. MacLean and Brian Hare, “Dogs Hijack the Human Bonding Pathway: Oxytocin Facilitates Social Connections between Humans and Dogs,” Science, vol. 348, no. 6232 (17 April 2015): pp. 280–281, at p. 281. ↩︎
  48. Miho Nagasawa, et al., “Oxytocin-gaze Positive Loop and the Coevolution of Human-Dog Bonds,” Science, vol. 348, no. 6232 (17 April 2015): pp. 333–336, at pp. 333–334; see also Yury E. Herbeck, et al., “Fear, Love, and the Origins of Canid Domestication: An Oxytocin Hypothesis,” Comprehensive Psychoneuroendocrinology, vol. 9, art. 100100 (2022): 8 pp., doi: 10.1016/j.cpnec.2021.100100. Note well that domestication research dependent on the Siberian fox population associated with Dmitry Belyaev’s “Russian Farm-Fox Experiment” (as in Herbeck, et al.) must be tempered with the historical fact of their descent from nineteenth-century farm-foxes bred in captivity—specifically for the Canadian fur trade (Kathryn A. Lord, et al., “The History of Farm Foxes Undermines the Animal Domestication Syndrome,” Trends in Ecology and Evolution, vol. 35, no. 2 (February 2020): pp. 125–136, doi: 10.1016/j.tree.2019.10.011). ↩︎
  49. Anna Kis, Alin Ciobica, and Józef Topál, “The Effect of Oxytocin on Human-directed Social Behaviour in Dogs (Canis Familiaris),” Hormones and Behavior, vol. 94 (2017): pp. 40–52, doi: 10.1016/j.yhbeh.2017.06.001; p. 49. ↩︎
  50. For a convenient review of literature on this topic, see Alicia Phillips Buttner, “Neurological Underpinnings of Dogs’ Human-like Social Competence: How Interactions between Stress Response Systems and Oxytocin Mediate Dogs’ Social Skills”, Neuroscience and Biobehavioral Reviews, vol. 71 (2016): pp. 198–214, doi: 10.1016/j.neubiorev.2016.08.029. ↩︎
  51. The oxytocin-loop has been questioned as solid evidence for a coevolutionary model of domestication: “life as a pet dog” may actually account for oxytocin release in dogs (see Gwendolyn Wirobski, et al., “Life Experience Rather Than Domestication Accounts for Dogs’ Increased Oxytocin Release During Social Contact with Humans,” Scientific Reports, vol. 11, art. 14423 (2021): 12 pp., doi.org/10.1038/s41598-021-93922-1). See also n. X, below. ↩︎
  52. Nirit Geva, Florina Uzefovsky, and Shelly Levy-Tzedek, “Touching the Social Robot PARO Reduces Pain Perception and Salivary Oxytocin Levels,” Scientific Reports, vol. 10, art. 9184 (2020): 15 pp., doi.org/10.1038/s41598-020-66982-y; see http://www.parorobots.com/. Note the cache of research papers showcasing the device. ↩︎
  53. Ibid., p. 1. ↩︎
  54. Ibid., p. 10. ↩︎
  55. Wirobski, et al., “Life Experience.” ↩︎
  56. See, for example, the storm of research into oxytocin (OT) and social media set off by the work of so-called “neuroeconomist” Paul J. Zak (Adam L. Penenberg, “Social Networking Affects Brains Like Falling in Love,” Fast Company, 1 July 2010, https://www.fastcompany.com/1659062/social-networking-affects-brains-falling-love); but see Gideon Nave, Colin Camerer, and Michael McCullough, “Does Oxytocin Increase Trust in Humans? A Critical Review of Research,” Perspectives on Psychological Science, vol. 10, no. 6 (November 2015): pp. 772–789, in which the authors observe that the most widely cited, seminal study on the causal effect of exogenous OT on trust—forming the basis for many of the claims made about OT (including by Zak)—has not replicated well, and that “a cautious conclusion is that the basic relationship between OT and trust is not particularly robust” (p. 781; reference to M. Kosfeld, et al., “Oxytocin Increases Trust in Humans,” Nature 435 (2005): pp. 673–676). The many important “cautious” and “gloomy” conclusions (p. 783) put forth in this review article ought to be kept in mind by anyone wading into the wild world of OT research, as will become clear with reference to the study by Andrea Bonassi, et al., “Oxytocin Receptor Gene Polymorphisms and Early Parental Bonding Interact in Shaping Instagram Social Behaviour,” International Journal of Environmental Research and Public Health, vol. 17, art. 7232 (2020): 20 pp., doi: 10.3390/ijerph17197232, discussed in all its patently spurious glory, below. Clearly, human-OT research needs another thoroughgoing audit. ↩︎
  57. Sina Radke, et al., “Neurobehavioural Responses to Virtual Social Rejection in Females—Exploring the Influence of Oxytocin,” Social Cognitive and Affective Neuroscience (2021): pp. 320–333, doi:10.1093/scan/nsaa168; see also Part 2, § “Machine Learning and Neural Intervention.” ↩︎
  58. Ying Xing Feng, et al., “Conversational Task Increases Heart Rate Variability of Individuals Susceptible to Perceived Social Isolation,” International Journal of Environmental Research and Public Health, vol. 18, art. 9858 (2021): 14 pp., doi: 10.3390/ijerph18189858; p. 10. ↩︎
  59. Ewart J. de Visser, et al., “A Little Anthropomorphism Goes a Long Way: Effects of Oxytocin on Trust, Compliance, and Team Performance With Automated Agents,” Human Factors, vol. 59, no. 1 (February 2017): pp. 116–133; p. 117. ↩︎
  60. Ibid., p. 118. ↩︎
  61. Ibid., p. 127. ↩︎
  62. Ibid., p. 126. ↩︎
  63. Geva, et al., “Touching the Social Robot PARO,” p. 11. ↩︎
  64. Wirobski, et al., “Life Experience,” p. 9; for more on the two-stage hypothesis, see Udell and Wynne, “Ontogeny and Phylogeny.” ↩︎
  65. Wirobski, et al., “Life Experience,” p. 9. ↩︎
  66. For a convenient summary with additional—and startling—world statistics, see Jean M. Twenge et al., “Worldwide Increases in Adolescent Loneliness”, Journal of Adolescence, vol. 93 (December 2021), pp. 257–269, https://doi.org/10.1016/j.adolescence.2021.06.006. ↩︎
  67. Feng, et al., “Conversational Task,” p. 10. ↩︎
  68. COVID-19 pandemic-related studies, and studies produced after March 2020, routinely cite the extreme nature of social isolation due to lockdowns, and the pervasiveness of digital media as a result, as a prelude to wider discussions of the role of all forms of Internet-based digital media—especially AI-powered social media—in contemporary life. ↩︎
  69. Andrea Bonassi, et al., “Oxytocin Receptor Gene Polymorphisms and Early Parental Bonding Interact in Shaping Instagram Social Behaviour,” International Journal of Environmental Research and Public Health, vol. 17, art. 7232 (2020): 20 pp., doi: 10.3390/ijerph17197232; p. 1. ↩︎
  70. Cf. Carsten K. W. De Dreu, Matthijs Baas, and Nathalie C. Boot, “Oxytocin Enables Novelty Seeking and Creative Performance: Evidence and Avenues for Further Research,” WIREs Cognitive Science, vol. 6 (September/October 2015): pp. 409–417, doi: 10.1002/wcs.1354. ↩︎
  71. This study epitomizes the complacency of scientific communities (industrial or institutional) in thinking through the environmental and ethical consequences of their research and development initiatives. I would invite the reader to sit down with Aleksandr Solzhenitsyn’s three-volume The Gulag Archipelago (1974; Архипелаг ГУЛАГ, 1973) or Hannah Arendt’s The Origins of Totalitarianism (1958 [1951]) and judge for themselves whether escape into either science fiction or high-brow sociological and neuroscientific research is necessary to work out the sociocultural implications of such a paradigm in operation. Alternatively, China’s social credit system, insofar as it is understood by anyone, may be of parallel interest; or, ask a Uyghur. ↩︎
  72. Jihan Ryu, et al., “Shift in Social Media App Usage During COVID-19 Lockdown and Clinical Anxiety Symptoms: Machine Learning-based Ecological Momentary Assessment Study,” JMIR Mental Health, vol. 8, no. 9, art. e30833 (2021): 13 pp., doi: 10.2196/30833. ↩︎
  73. Xiaobo Guo, Yaojia Sun, and Soroush Vosoughi, “Emotion-based Modeling of Mental Disorders on Social Media,” in Proceedings of the IEEE/WIC/ACM International Conference on Web Intelligence (WI-IAT ’21), December 14–17, 2021, Essendon, VIC, Australia (New York: ACM, 2022), 9 pp., doi: 10.1145/3486622.3493916. A useful list of references on mental-health diagnostics and social media may be extracted from their bibliography. One of the earliest explorations is Shannon E. Holleran, “The Early Detection of Depression from Social Networking Sites,” unpubl. PhD dissertation, University of Arizona (2010); Fidel Cacheda, et al., present a mixed methodology and summary of additional related studies in “Early Detection of Depression: Social Network Analysis and Random Forest Techniques,” Journal of Medical Internet Research, vol. 21, no. 6, art. e12554 (2019): 18 pp., doi: 10.2196/12554. ↩︎
  74. Canadian readers will note a shared point of history here. New Hampshire’s Dartmouth College was founded by the Reverend Eleazar Wheelock in 1769. The College evolved out of Wheelock’s free, voluntary, and private residential school for Six Nations Indigenous children in Connecticut. His most famous former pupil is the young Thayendanegea/Joseph Brant, later of the Grand River. Wheelock’s school is among the precursors in the historical trajectory of the separate and assimilative education of Indigenous children, which culminated in the later and disastrous residential and industrial schools that proliferated in the United States and Canada throughout the nineteenth and twentieth centuries. Dartmouth College was initially created for the education of “white youths”; Indigenous youth went to a companion “Charity School.” Wheelock’s son, John, president of both schools, taught Thayendanegea/Joseph Brant’s sons at the latter school. For more, see Isabel Thompson Kelsay, Joseph Brant, 1743–1807: Man of Two Worlds (Syracuse: Syracuse University Press, 1984), at pp. 71–91 and 609–610. ↩︎
  75. Brian A. Feinstein, et al., “Another Venue for Problematic Interpersonal Behavior: The Effects of Depressive and Anxious Symptoms on Social Networking Experiences,” Journal of Social and Clinical Psychology, vol. 31, no. 4 (2012): pp. 356–382; p. 358. ↩︎
  76. As established in Saif Mohammad, et al., “SemEval-2018 Task 1: Affect in Tweets,” in Proceedings of the 12th International Workshop on Semantic Evaluation, edited by Mariana Apidianaki, et al. (Stroudsburg: Association for Computational Linguistics, 2018), pp. 1–17. ↩︎
  77. Explored in Part 2. ↩︎
  78. Saif Mohammad is a co-researcher on the team that produced the “Affect in Tweets” data used to train this model (see Mohammad, et al., “Affect in Tweets”). For a complete list of available NRC sentiment and emotion lexicons, see https://nrc.canada.ca/en/research-development/products-services/technical-advisory-services/sentiment-emotion-lexicons. ↩︎
  79. One wonders how this ambition may affect the “Twitterverse” in terms of its overall value and viability, and as a resource for “civilization” and “humanity.” Soroush Vosoughi, one of the co-researchers on this study, is a co-author of the 2018 study of false-rumour spread on Twitter, discussed in Part 2. ↩︎
  80. The urgency of the task is evident in the papers submitted for the International Workshop on Semantic Evaluation 2020’s competition task 8, “memotion analysis”, under the “Humor, Emphasis, and Sentiment” category (https://alt.qcri.org/semeval2020/index.php?id=tasks); see, for example, Chhavi Sharma, et al., “SemEval-2020 Task 8: Memotion Analysis—The Visuo-Lingual Metaphor!”, in Proceedings of the 14th International Workshop on Semantic Evaluation, Barcelona, Spain (Online), December 12, 2020 (Barcelona: International Committee for Computational Linguistics, 2020), pp. 759–773, doi: 10.18653/v1/2020.semeval-1.99. In their call for submissions, the organizers of the task note that “the growing volume of multimodal social media” is making detection of offensive or otherwise disturbing memes “impossible to scale” without automated interpretation for companies like Facebook, which rely on “human contractors” (https://competitions.codalab.org/competitions/20629 (accessed 6 May 2022)). ↩︎
  81. Mohammad, et al., “Affect in Tweets,” p. 1. ↩︎
  82. Mohammad, et al., “Affect in Tweets,” section 2, p. 2. ↩︎
  83. Hannah Arendt, The Human Condition, 2nd edition (Chicago: University of Chicago Press, 2018 [1958]), p. 45. ↩︎