school of x in connection with xCoAx

class of 2023

01 Artificial Intelligence as a Theatre Director
02 Prototype Warfare: The Necropolitics of Late Stage Techno-Imperialism
03 F*cking with the Virtual
04 Melodia Atomizacji
05 Open: A Pan-ideological Panacea, a Free Floating Signifier
06 Analogue, Anomalous, Amorphous: The Creative Possibilities of Computation beyond Technocapitalism
07 The Atlas of Dark Patterns: Charting New Spaces of End User Consent
08 Re-Valuing RS Through Configure-Able Methods
09 Computing atmospheric attunement and hybrid listening through Augury and Scrying
10 Organum Paradoxum/Scalptomorpha: A Sculptural Parasite Plug-in to Hack the Human Anatomical System
11 A Plague in Cyberspace: The Importance of Being-on-Line
12 A Cocreative Computational Approach to Musical Analogy
13 Creating with Marine Fish: Interspecies Architecture as a Communication Tool

01 Artificial Intelligence as a Theatre Director

Oreste Campagner 1,
1 Universität Konstanz, Konstanz, Germany
oreste.campagner@uni-konstanz.de

Abstract

Despite numerous applications in live performance, Artificial Intelligence (AI) seems to be employed as a passive element, neither deciding nor actively shaping what happens on stage. I explore the possibility of AI as an active element, able to direct a performance. To push this scenario to its limits, I suggest implementing AI in the “eterodirezione” (hetero-direction) theatre device, where actors receive their part through in-ear monitors. I discuss the two possible outcomes of the implementation, “asynchronous direction” and “synchronic direction”, and focus on the latter, where AI composes dramaturgy and instructs the actors in real time. I finally discuss the major consequences of this new device, concerning the nature of the performance, the human-machine interface and the role of the director and actors. AI and actors prompt each other, improvising in a prompting loop where both become co-directors and co-performers, whereas the human director acts as a demiurge-like figure. This contribution is intended as the first theoretical framework in the process of creating a new theatre language where AI is an active element of performance.

Keywords

Artificial Intelligence, Theatre, Hetero-Direction, Synchronic Direction, Prompting Loop

Intro

Automation and machinery have long fascinated theatre-makers and found extended applications. In Stifters Dinge1 (2007), Heiner Goebbels realised a fully automated live music performance staging complex machinery, which led scholars to speak of “Robot Opera” (Sigman 2019). In 2020, the Copernicus Science Centre in Warsaw opened the first “Robotic Theatre” in the world2.

Similarly, AI has drawn the attention of performance artists. In February 2021, “the first computer-generated theatre play” was staged as part of the THEaiTRE project3 (Rosa et al. 2020). Numerous other applications have already appeared, whose databases are either pre-existing, designed specifically for the performance, or generated by discretisation of scenic elements. Among these are Corpus Nil4 (2016) by Marco Donnarumma, and Discrete Figures5 (2018) by Elevenplay, Rhizomatiks Research and Kyle McDonald. In the former, AI synthesises sonic elements after elaborating signals collected via microphones and electrodes installed on the performer’s body. In the latter, human performers interact onstage with pre-rendered 3D figures whose movements are generated by AI after the elaboration of a video database of human performers’ dance sequences (Befera and Bioglio 2022). In her interactive meta-drama AI6 (2021), Jennifer Tang stages, over multiple evenings, the creation of a play through GPT-3 (Akbar 2021).

In the above examples, AI seems to be employed as a mere tool, namely as a pre-scenic instrument to compose a dramaturgy or a movement sequence (as in the THEaiTRE project, AI, and Discrete Figures) or as an on-stage complement translating into sound, light or scenography (as in Corpus Nil). AI outputs are therefore either controlled and placed in advance within the global dramaturgy, or allowed to have an impact on the performance in real time within pre-set limits. This is consistent with the ongoing debate on Generative Art, which tends to refute AI autonomy in the creative process (Hertzmann 2018). The concept of “Meaningful Human Control” (MHC) applied to Generative Art (Akten 2021) strengthens this view, configuring “generative AI as a tool to support human creators”, who are the only agents accountable for the creative output (Epstein et al. 2023).

Nevertheless, I argue that in Generative Art, AI seems to exhibit a certain degree of autonomy. After an initial prompt by the human user, the AI software is responsible for the image synthesis: what happens “inside the machine” cannot be monitored and the output cannot be foreseen. AI thus exhibits active features. In theatre, no examples have yet emerged where AI is employed as an active element shaping, creating, and ultimately directing the theatre performance. The above-mentioned projects in fact show that AI can contribute with a certain degree of agency and unpredictability, for example by synthesising sonic elements, but cannot fully decide what happens onstage. A notable exception is the Improbotics experiment7, an improvising device where AI provides lines to the actors. However, the presence of “free-will improvisers” who do not receive lines from AI, and the restriction of AI intervention to textual dramaturgy, ultimately limit, in my view, its autonomy on stage.

In this essay, I address the question of whether and to what extent AI can have an active role in theatre performance, in the sense outlined above. To answer this question, a way of introducing AI into theatre performances needs to be found that results in AI shaping the performance and maximizing its agency. I identify an already existing performance device to be implemented with AI and discuss the main consequences arising from this operation. My contribution aims to serve as a theoretical attempt to frame AI implementation in theatre and to shed light on some interesting aspects: concerning AI research, it helps to clarify the role of prompts and embodiment in the human-machine interface; from a theatrical standpoint, it helps to reaffirm the role of the human in a world progressively centred on virtuality. Ultimately, this essay is intended as a “declaration of intent” towards the creation of a new language in the performing arts, with further theoretical and practical studies as its natural follow-up.

Hetero-direction

I suggest as a base of exploration the method of “eterodirezione” (“hetero-direction”) invented and developed by the Italian experimental company Fanny&Alexander. Hetero-direction consists “in having the performer receive the directions for his or her part in the scene through an earpiece” (Di Bari 2021). The performer plays “lines and actions ‘administered’ by the directing division” without having to learn them by heart and without knowing the exact order of their part. This results in preventing “the routine and repetitiveness” built into the theatrical act (Margiotta 2020, my translation).

A closer look at the method provides useful insights into the possibility of AI implementation. I identify three phases of the creative process, summarised in the scheme of Figure 1, which will serve as a reference for the next sections. As is common in theatre practice, “dramaturgy” can here refer to a broad variety of performance components, ranging from text to music, light, and space design. To point out the distinctive features of hetero-direction, I will henceforth limit myself to dramaturgy as text and as a sequence of movements or physical actions.

  1. In the first step, the dramaturgy is composed by the playwright and/or director and/or actors as a textual and/or physical score. The latter usually takes the form of a detailed (textual) list of precisely (physically) coded gestures, with a corresponding identifier name, which the actors will memorize as a physical “vocabulary”.

  2. The score is then rehearsed and refined with the interpreters.

  3. During the live performance, staged by the director, the set of instructions is transmitted to the actors through earphones, either through live dictation or as a pre-recorded track, e.g. with instructions for gestures recorded on the L-channel and instructions for lines on the R-channel (Margiotta 2020).

Figure 01. Scheme of the hetero-direction method, showing the parts of the creative process and the agents responsible for each of them

In my view, introducing AI may result in further developments of the device and in AI exhibiting an active role in live performance. I argue that AI can be introduced in hetero-direction in two ways:

Asynchronous and Synchronic Direction

In asynchronous direction (see Figure 2), the hetero-direction scheme outlined in the previous section undergoes no relevant changes: the dramaturgy is composed in the first step, rehearsed, and fine-tuned by the director and actors and finally transmitted onstage to the interpreters. The only difference consists in the implementation of AI in the first step. As we have seen in the introduction, both the composition of a play and of sequences of movements by AI have been proven possible. Nevertheless, in asynchronous direction, the result of AI implementation is analogous to that encountered in THEaiTRE or in Discrete Figures: AI is employed as a tool and does not actively direct the performance, which is ultimately staged by the human directing division. Moreover, the AI component would likely not be easily noticeable to the audience, since the staging is entirely human-driven. For these reasons, I consider this option less interesting and will focus on the second one.

Figure 02. Scheme of the asynchronous direction device

In synchronic direction (see Figure 3), since the dramaturgy is composed by AI onstage, the original scheme is drastically changed, although the three moments of design, tuning and staging are preserved:

  1. In the first step, the device needs to be designed and set into place. This means programming and training an AI model to detect and react to all the scenic variables while composing new dramaturgy and instructing the actors. A whole technological apparatus needs to be put in place, allowing the AI to perceive what is happening onstage, for example through a scene-mapping mechanism made up of microphones, cameras, and sensors. The AI could therefore recognize which positions are occupied onstage and react with an “if…then” logic, such as: if actor 1 is in A, then actor 1 goes to C and says “potato” (see the sketch after this list).

  2. In the second step, the functioning of the whole device is tested and fine-tuned by the director and through interaction with the actors. Given the complexity of the apparatus, steps 1 and 2 are likely to be performed in a loop every time a major difficulty arises during rehearsals, until the device is stable and does not jam.

  3. The third step consists of the final performance. Here, staging and dramaturgy creation overlap. AI composes the dramaturgy and gives instructions to the actors, who perform them, providing in turn material for the AI to react to.
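To make the first step more concrete, the following minimal sketch, in Python, illustrates the kind of “if…then” reaction logic described in point 1. Every name in it (the Rule table, the zone labels, the react function) is a hypothetical illustration of the idea, not an existing system; in an actual device, the fixed rule table would be replaced by a trained model and the stage state by the quantised sensor data described above.

from dataclasses import dataclass

@dataclass
class Rule:
    actor: str      # which performer the rule applies to
    if_zone: str    # zone the actor currently occupies
    then_zone: str  # zone the actor is instructed to move to
    line: str       # line the actor is instructed to speak

# A tiny rule table, e.g. "if actor 1 is in A, then actor 1 goes to C and says 'potato'".
RULES = [
    Rule(actor="actor_1", if_zone="A", then_zone="C", line="potato"),
    Rule(actor="actor_2", if_zone="C", then_zone="B", line="again, louder"),
]

def react(stage_state):
    """Map the current stage state (actor -> occupied zone) to earpiece instructions."""
    instructions = []
    for rule in RULES:
        if stage_state.get(rule.actor) == rule.if_zone:
            instructions.append(f"{rule.actor}: go to {rule.then_zone} and say '{rule.line}'")
    return instructions

# One tick: the scene-mapping apparatus reports positions, instructions go to the earpieces.
print(react({"actor_1": "A", "actor_2": "B"}))  # -> ["actor_1: go to C and say 'potato'"]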

Figure 03. Scheme of the synchronic direction device

The uniqueness of this scenario is evident: AI is no longer employed as a tool or a complement but actively shapes the live performance, to the extent that it could be argued to be the actual director. This scenario entails several consequences on both a theoretical and practical level, some of which I address in the next sections. I will focus on the nature of a performance realised through synchronic direction, the role of the human component, prompts and creativity.

Prompting Loops, Co-workers, and Demiurges

Fabian Offert points out two interesting commonalities between theatre and machine learning (ML) (Offert 2019). On the one hand, they both “focus on discrete state transformations”, i.e. the transitioning between fixed states (be they machine states or those resulting from the composition of scenic elements) inside a “black-box assemblage” corresponding to the mise-en-scène in theatre and to the machine set-up in ML. On the other hand, they must both deal with an external singularity element they need to make sense of. Some theatre states are in fact “probabilistic”, insofar as they are affected by an external influence, such as improvisation or the presence of the audience. ML must instead extract “probability distributions from […] real-world data”. Accordingly, both connect their “black-box” assemblage with the outside world in order to “make sense” of it. Quoting Offert:

What theater and machine learning have in common is the setting up of an elaborate, controlled apparatus for making sense of everything that is outside of this apparatus: real life in the case of theater, real life data in the case of machine learning. (Offert 2019, his italics)

In synchronic direction, not only do theatre and ML exhibit this common feature, but they are also connected in a loop. This becomes clear when considering the nature and role of the prompt in this device. In Generative AI models, prompts usually consist of text lines which trigger a cascade of untraceable AI processes and result in a textual or image output. In our case, instead, the prompt is a physical component that is captured and quantised by the technological apparatus: the actors’ blood pressure or voice, a position in space, etc. In a word, the body is here the prompt. The process, though, does not end there. The AI elaborates the input, synthesises new elements of dramaturgy and in turn instructs, i.e. prompts, the actors. Synchronic direction therefore works as a prompting loop.
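The loop itself can be sketched in the same spirit. Again in Python, and again with every component as a hypothetical stand-in (capture_stage for the sensor apparatus, synthesise_dramaturgy for the AI model, transmit for the in-ear monitors), a minimal version reads:

import random

def capture_stage():
    """Quantise the physical prompt: positions, voice level, pulse, and so on."""
    return {"actor_1": {"zone": random.choice("ABC"), "voice_db": random.randint(40, 90)}}

def synthesise_dramaturgy(state):
    """Stand-in for the AI model: map the captured state to new dramaturgy."""
    zone = state["actor_1"]["zone"]
    return {"actor_1": f"leave {zone} and whisper the last line you heard"}

def transmit(instructions):
    """Send each instruction to the matching in-ear monitor."""
    for actor, instruction in instructions.items():
        print(f"[earpiece, {actor}] {instruction}")

# The prompting loop: the bodies prompt the AI, the AI prompts the bodies, and so on.
for _ in range(3):
    state = capture_stage()                      # the body as prompt
    instructions = synthesise_dramaturgy(state)  # the AI composes new dramaturgy
    transmit(instructions)                       # the AI, in turn, prompts the actors

How the actors perform each instruction alters the next captured state, which is precisely what keeps the loop open-ended.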

In such a system, what is the singularity element pointed out by Offert? I argue that theatre and ML constitute the singularity source of one another. For theatre (the actors), singularity comes from the untraceable processes of AI and the unpredictability of its output. For ML, singularity comes from theatre and from all the external elements that can influence a theatre performance. This is where the role of the actor becomes clear. One could argue that synchronic direction flattens human creativity and reduces the actor to a “mere tool”, a puppet manipulated by AI. On the contrary: the actors bring into the device all the unpredictability inherent to their being human. Just as every dancer moves through the same choreography in a personal way, interpreting the pre-established sequence of movements through their own peculiar sensibility and body, so in synchronic direction will the actors perform AI instructions according to their own individuality. Moreover, their creativity will be solicited by the unpredictability of AI outputs. The same AI prompt results in different human outputs depending on who is performing it: different reactions will result in different quantised signals, triggering the AI in different ways. As a result, human creativity is preserved, if not enhanced, and actors and AI co-improvise, co-direct, and co-perform together. Consistently with the concept of a prompting loop, the AI is not only directing but also performing, and the actors are not only performing but also directing.

It is worth noting that the prompting loop also highlights the uniqueness and irreplaceability of the human onstage. A synchronic direction device where no actors were involved, or where actors were replaced by robots, would arguably have no singularity other than the audience element or unlikely external events. This would result in the AI prompting itself or some entities designed to repeat the same task mechanically, always in the same way. Without a human being onstage, the performance would soon lose its driving force, which ultimately lies in the unpredictability of the human.

The role of the human director in synchronic direction remains to be discussed. It appears that, in a prompting loop, no place for a director is envisaged. Indeed, I argue that the role of the human director is mostly limited to the design and rehearsal phases and is of a different nature than the conventional one (i.e. the “omniscient creator” of the performance). To recall the analogy with Generative AI models, the human director acts in our case as a hybrid between a technologist, an engineer, and a supervisor. They are the figure who coordinates the apparatus setup, verifies its functioning and its interactions with the actors, and sets the limits for the performance staging. To prevent the performance from degenerating into non-sensical improvisation, rules in fact need to be established, concerning for example the overall dramaturgical context or “topic”, the rhythm and state transition frequency, the allowed thresholds of sound and light effects, etc. Retrieving a definition often encountered in the history of theatre direction, the human director acts here as a clock-maker or as a “kind of a demiurge” (Artaud 1958). Nevertheless, the director has no say in what happens onstage and in what the audience is going to watch, since only the two elements of the prompting loop, the actors and the AI, co-direct and co-perform onstage.

Outro

In this essay, I attempted to frame theoretically the implementation of AI in theatre and to set the ground for practical experiments. My interest was to understand to what extent AI can become an active and directing entity in live performances. To push this possibility to the limit, I proposed to implement AI in the “hetero-direction” device and focused on one of the possible outcomes, which I called “synchronic direction”. Here, AI composes dramaturgy and instructs the actors in real-time: AI can indeed direct a performance actively. A closer look at this system helps to clarify the role of the human in the device and, overall, in live performances. Considering the prompting loop: (i) human unpredictability appears to be an unavoidable element of performances; (ii) creativity is preserved, if not enhanced; (iii) actors and AI co-improvise, co-direct and co-perform onstage; (iv) the human director represents a demiurge-like figure.

For the sake of clarity, however, dramaturgical elements such as space, light and sound design were not considered here and shall be included in further developments of this model. The role of improvisation, which appeared to underlie the human-AI onstage interaction, also remains to be clarified. Moreover, given the technical complexity of this device, the next step would consist in ascertaining its feasibility and testing it with some initial experiments.

References

Akbar, Arifa. 2021. "Rise of the robo-drama: Young Vic creates new play using artificial intelligence." theguardian.com. 24 August. Accessed June 23, 2023. https://www.theguardian.com/stage/2021/aug/24/rise-of-the-robo-drama-young-vic-creates-new-play-using-artificial-intelligence.

Akten, Memo. 2021. Deep visual instruments: realtime continuous, meaningful human control over deep neural networks for creative expression. PhD thesis, Goldsmiths, University of London.

Artaud, Antonin. 1958. The Theatre and Its Double. Translated by Mary Caroline Richards. New York: Grove Press, Inc.

Befera, Luca, and Livio Bioglio. 2022. “Classifying Contemporary AI Applications in Intermedia Theatre: Overview and Analysis of Some Cases.” CREAI@AI*IA.

Di Bari, Francesca, et al. 2021. "A Journey of Theatrical Translation from Elena Ferrante's Neapolitan Novels: From Fanny & Alexander's No Awkward Questions on Their Part to Story of a Friendship (Including an Interview with Chiara Lagani)." MLN 136 (1).

Epstein, Ziv, et al. 2023. "Art and the science of generative AI: A deeper dive." https://arxiv.org/abs/2306.04141.

Hertzmann, Aaron. 2018. “Can Computers Create Art?” Arts 7 (2): 18.

Margiotta, Salvatore. 2020. "La pratica dell’eterodirezione nel teatro di Fanny & Alexander." Acting Archives Review X (20).

Offert, Fabian. 2019. “What Could an Artificial Intelligence Theater Be?” Fabian Offert's Blog. 12 April. Accessed June 23, 2023. https://zentralwerkstatt.org/blog/theater.

Rosa, Rudolf, et al. 2020. "THEaiTRE: Artificial intelligence to write a theatre play." arXiv preprint arXiv:2006.14668.

Sigman, Alexander. 2019. "Robot Opera: Bridging the Anthropocentric and the Mechanized Eccentric." Computer Music Journal 43 (1): 21–37.

Footnotes

  1. Source: https://www.heinergoebbels.com/works/stifters-dinge/4 (accessed June 23rd, 2023)

  2. Source: https://culture.pl/en/event/robots-perform-lems-prince-ferrix-and-princess-crystal (accessed June 23rd, 2023)

  3. Source: https://www.theaitre.com/ (accessed June 23rd, 2023)

  4. Source: https://marcodonnarumma.com/works/corpus-nil/ (accessed June 23rd, 2023)

  5. Source: https://research.rhizomatiks.com/s/works/discrete_figures/en/ (accessed June 23rd, 2023)

  6. Source: https://www.youngvic.org/whats-on/ai (accessed June 23rd, 2023)

  7. Source: https://improbotics.org/ (accessed June 23rd, 2023)

02 Prototype Warfare: The Necropolitics of Late Stage Techno-Imperialism

Jimi DePriest 1,
1 Master of Fine Arts student, University of Western Australia, Perth, Australia
jimi.depriest@research.uwa.edu.au

Abstract

This essay employs necropolitical theory to investigate the imperative of late-stage techno-imperialism to prevent its own collapse by establishing prototype warfare as a new model for military production. Research projects to develop Unmanned Aerial Vehicles (UAVs) with Automatic Target Recognition (ATR) capabilities, which have marked the initiation of prototype warfare, will be examined to address the necropolitical notions of sovereignty embedded in the operational function of autonomous weapons as an extension of neoliberal state power. The central thesis of this essay is that the critical role IBM punch card technology played in automating the Nazi Holocaust must be positioned as a core historical precedent for the production of autonomous weapons systems.

Keywords

Prototype Warfare, Techno-Imperialism, Autonomous Weapons, Automation Technology, Necropolitics, Sovereignty, Execution

Intro

This essay will explore the emergent necropolitical terrain of late-stage techno-imperialism, focusing on the defense industry’s aim to apply rapid developments in automated weapons technologies to live combat scenarios on an experimental basis, for the sake of optimizing capital accumulation, in a practice known as prototype warfare. Techno-imperialism is conceptualized as the successor to techno-colonialism, which has dilated the scope of its extractive technologies to globally expand its reach of power. As techno-colonialism is predicated on the capitalist state having a severely asymmetrical concentration of control over technological production with which to exert hegemonic political dominance, it evolves into techno-imperialism as the profit gained from its extractive forces begins to stagnate or decline and thus requires new territories and modes of extraction to maintain the relevance and authority of this parasitic system (McElroy 2019). Technological research and development has historically been funded and directed by the military, with its operational functions driven towards the mandates of wartime production (Edwards 1997). Prototype warfare presents a novel phenomenon in this economic paradigm, as it signifies a new era of warfare in which the pursuit of capital to be gained through technological production has entirely superseded the political agendas underlying military invasion. Historically, preconceived ideological motivations, geopolitical conflicts and struggles for command over foreign resources preceded the development of new military technologies, which were used as thoroughly considered strategic aids for the advancement of the neoliberal political program. The prototype paradigm evolves this dynamic by centering technological innovation as the primary ideological motive propelling wartime operations, as the profits to be gained from catalyzing advancements in the automation industry have become the next frontier of capitalist conquest. Through examining the ideological objectives and historical conditions that led to prototype warfare, I postulate that its onset indicates the decline of late-stage techno-imperialism into crisis. This essay will commence by examining the historical background of capitalism’s reliance on military technological development and the imperialist war industry for economic stability. The weaponization of IBM punch card technology by the Nazi regime will then be located as an important historical example of automation technology being made into a massively lucrative industry through its usage for mechanizing serialized execution. The essay then proceeds to further delineate the concept of prototype warfare and how it is currently being implemented by the U.S. Department of Defense (DoD). To conclude, I invoke Achille Mbembe’s theorization of necropolitics to consider whether the automation technologies historically used for exercising absolute power over the mortality of the population are evolving into new sovereign entities under the direction of prototype warfare. This provocation unfolds into a historical comparison between the necropolitical function of automation technology in the context of Nazi Germany and late-stage techno-imperialism, arguing that prototype warfare signals the inevitable decline of the capitalist system.

Historical Background

Capitalism routinely relies on technological and scientific advancement to further develop its productive forces. Marx notes that when the advancing forces of science, technology and economic growth stagnate, revolutions occur as a means to remove the barriers inhibiting social progress (Engels and Marx 1932). World War II incited the introduction of new production techniques geared for purposes of war. Many technological innovations in aircraft manufacture, medicine, nuclear energy and telecommunications were born out of the realization of their value as a means of advancing wartime industry and military power. The military industrial complex continues to play a massive role in the development of global productive forces due to the state control leveraged towards funneling innovative research into projects focused on military initiatives (Gottheil 1986). Global military spending totaled 1.981 trillion dollars in 2020 (Lopes da Silva, Tian and Marksteiner 2021). Reich and Finkelhor (1970) posit that without militarism, the entire capitalist economy would return to the state of collapse it experienced prior to its rehabilitation by the Second World War. Military production sustains the modern capitalist economy because the commodities it produces are designed to fulfill the insatiable demands of war, which is waged relentlessly with no apparent end (Gottheil 1986).

Of the array of corporations that emerged from World War II having accrued billions in profit and expanded into global monopolies, IBM stands out due to the impact of an extensive business partnership held with the Nazi state (Black 2001). The Nazis’ employment of automated information technology demonstrates such technology’s susceptibility both to adapt to and to propagate a genocidal authoritarian agenda. Directives for advancing the functionality, processing power and data storage faculties of IBM’s calculation machinery were driven by the Third Reich’s homicidal aim of identifying and destroying the lives of the Jewish people and those deemed undesirable to the fascist regime’s construction of an Aryan society. The programs curated by IBM personnel had to be designed not only to tabulate the personal information and assets of every individual in Germany, but to systematically map and sort citizen identities according to Nazi approximations of Jewishness, ethnicity, disability, neurodivergence, homosexuality and political disobedience. Data tabulations geared to extend the Nazi regime’s war effort were orchestrated in a multi-tiered procedural apparatus in surveillant pursuit of tracking and coordinating the movement and location of every person, resource, head of livestock, artillery piece, round of ammunition, tank, vehicle, train and piece of currency in occupied German territory (Black 2001). The severity of the abuses inflicted by the regime’s warmongering practices was accelerated by its fixation on optimizing the efficiency, order and systematization of the fascist political project through mechanization. Every stage of the Nazi operation was reliant on the equipment and technical expertise of IBM, and the alarming speed with which the Holocaust was executed was due to the multi-territorial statistical analysis applied to the regimentation of the genocidal campaign through computation. Though guilty of profiteering off the Nazi Holocaust, IBM evaded culpability by way of the political protections offered to such a powerful corporation by the U.S. government. The spectrum of political and military advantages to be gained through IBM information technology led to its adoption by the U.S. military for planning and conducting war strategies (Black 2001).

Prototype Warfare

The onset of the fourth industrial revolution, characterized as the next stage of the digital/information age, has prompted a new revolution in military production dubbed “Prototype Warfare.” The concept of prototype warfare was developed in the 1990s and appropriates language found in complexity and information theory to articulate how the military can strategically yield technological advantage in the information age (Hoijtink 2022). Prototype warfare abandons the methodical mass production of well-tested and refined ammunition, weaponry, and vehicles that characterized military industries of the past. Instead, prototype warfare proposes to use active battlegrounds and real-time operations as testing sites for the myriad experimental, Artificial Intelligence (A.I.)-enabled military technologies being manufactured at small scales with unproven capabilities and functions. In line with the adoption of terminology from information theory, the military envisages a ‘decentralization’ of mass coordinated operations to effectively integrate the proliferation of A.I.-enabled devices into a network that lacks a central point of weakness. Prototype warfare implies that battlefields are being situated as techno-scientific laboratories, platforms for the experimental interaction of A.I.-assisted sensors, satellites, weapons systems, autonomous robots, unmanned vehicles and human life (Hoijtink 2022). Each of these actors will be positioned as a variable in risk-intensive research practices that will necessitate an increased tolerance for failure by ground operation personnel. The impetus to forgo former standards of battlefield readiness and try out premature technologies on active battlefronts is largely driven by the mounting pressure that the international A.I. arms race has placed on the U.S. to outcompete political rivals China and Russia in its struggle to uphold technological supremacy and thus political dominance on the global stage (Sandals 2020).

The concept of utilizing prototyping and experimentation practices within the domain of war was first declared as a central aim of the Department of Defense as part of the Third Offset Strategy, initiated in 2014. The primary objectives of the Third Offset Strategy are to preserve U.S. military dominance within the field of A.I. and to further hone the research and development of robotics and system autonomy, miniaturization, big data, and advanced manufacturing by strengthening collaborative relationships between the U.S. military and innovative private sector enterprises (Fiott 2016). Closely following the announcement of the Third Offset Strategy came the establishment of the Defense Innovation Unit (DIU) in 2015, which seeks to accelerate the military’s adoption of commercial technology and to rapidly prototype and field advanced commercial products that address national security challenges. The DIU was designed to evade rules and regulations customary of the defense acquisition process by leveraging the Other Transactions Authority in order to contract out prototypes in as few as 60 to 90 days (Kuykendall 2017). The launch of Project Maven in 2017, funded by the Algorithmic Warfare Cross Functional Team, was referred to by director Lieutenant General John Shanahan as the beginning of prototype warfare. In accord with the Third Offset Strategy’s goal to foster new and deeper relationships with the private sector, Project Maven sought the expertise and resources of Google in its quest to use A.I., deep learning, and computer vision algorithms to detect, classify and track objects within Full Motion Video images. Though Google has ceased work on Project Maven since its contract with the Department of Defense (DoD) expired in 2019, due to mass resistance and backlash from employees, the project remains in operation under the control of the National Geospatial-Intelligence Agency (Strout 2022). Project Maven embodies a robust vision for the use of A.I. in warfare, with further aims to extend the application of A.I.-directed surveillance across many forms of data exploitation, including Captured Enemy Material, Acoustical Intelligence, the Overhead Persistent Infrared program and publicly available information. The ultimate outcome of the project will be to outfit tactical UAVs with Automatic Target Recognition (ATR) capabilities, an effort which boasts the ability to reduce the kill chain decision-making process from 20 minutes to 20 seconds by replacing human cognition with A.I. (Office of the Secretary of Defense 2019). The overarching purpose of relegating the kill chain to A.I.-enabled machine processes is to increase the efficiency of wartime operations by reducing the cost and time necessary for target identification and execution, and subsequently to increase lethality. Advances in automated weapons systems designed to maximize the death rates of those designated enemy combatants simultaneously propel an increase in the profit margins accrued by the arms industry, as defense contractors profit directly off every bullet and missile fired.

Necropolitics & Imperialism In Decline

In a paper titled “Necropolitics,” Achille Mbembe lays a theoretical foundation for necropolitics/necropower as an expansion of Foucault’s conception of biopolitics/biopower, one which seeks to account for the contemporary methods of execution through which political bodies exercise sovereignty, a practice ideologically veiled in the operation of war. Mbembe’s construction of sovereignty disposes of its typical connotations with struggles for autonomy to focus on figures of sovereignty whose primary objective is the generalized instrumentalization of human existence and the material destruction of human bodies and populations (Mbembe 2003). This construction effectively defines the sovereign figure as one who maintains the right to kill. When scaled to the level of state power, the right to kill is inflated into the authority to exercise control over the mortality of a population at large. The Nazi state is widely recognized as the epitome of biopolitical sovereignty because a core function of its political operation was to organize the mass execution of the Jewish population.

Mbembe notes that a number of analysts argue that the material premises of Nazi extermination are to be found in colonial imperialism on the one hand and in the serialization of the technical mechanisms for putting people to death on the other. Mbembe proceeds to reference how the gas chambers and ovens were the result of ongoing processes to dehumanize and industrialize death. He explains how, through mechanization, serialized execution was transformed into a purely technical, impersonal, silent and rapid procedure. However, Mbembe does not discuss the equally decisive role that the data tabulations performed by IBM punch card technology played in turning genocide into a mechanized process (Mbembe 2003). The industrialization and mechanization of this genocide can only be explained by the methods through which punch card technology sorted through billions of bits of data representing the demographics of the entire population and systematically marked millions of individuals for death based on the Nazis’ classifications of who should live and who should die. Without punch card technology, the Nazi genocidal project would arguably have been of nominal scale, bound to the limitations of manual data processing. The Nazi regime’s historical positioning as the ultimate example of biopower is largely due to the efficacy with which it utilized automation technology as a hyperextension of sovereignty. I would further contend that the use of punch card machines to orchestrate genocide marks the earliest example of biopolitical sovereignty being outsourced to technology.

The production of UAVs equipped with A.I.-enabled Automatic Target Recognition capabilities as a means of determining who is an adversary that will be killed by the state and who will be allowed to live distinctly echoes the employment of punch card technology as a tool for automating genocide. Autonomous weapons systems possess a historical parallel to the punch card machines designed for the Nazi state that is unmatched by other forms of technology engineered for war, because both were developed with the explicit goal of applying automation technology to the process of mapping a population into politically contrived classifications of who will live and who will die. Automatic Target Recognition acts as a contemporary reformulation of biopolitical sovereignty being hyperextended by and outsourced to technology. The development of A.I. programs designed for the purpose of determining whom the state will kill provokes one to question whether A.I. is being endowed with a novel form of biopolitical sovereignty, and what this implies about the nature of the state producing it. The introduction of prototype warfare as a method for revitalizing the military industry as a profitable extractive force signals that the viability of late-stage techno-imperialism has reached a place of uncertainty. Late-stage techno-imperialism now exists amidst a backdrop of burgeoning ecological catastrophe, growing social crises and heightening international political tensions (Foster 2019). The urgent need to accelerate military technological development through the reterritorialization of the battlefield as a laboratory for unstable autonomous weapons systems points to the global conditions which are pushing the dominance of an exploitation-based socio-economic system toward greater instability. Positioning technological innovation as the driving motivation for military invasion serves as a prognosis for the threats that circumstances such as dwindling natural resources and the economic and military advancement of rival political powers pose to the dominance of western imperialism. In Marxist literature, fascism is theorized as the transformation of a collapsing capitalist state into an authoritarian regime that aims to preserve the prevailing economic order by exacerbating the exploitation and persecution of marginalized groups and vastly expanding imperialist conquest as a source of financial gain (Kawashima 2021). The conversion of Germany into the Nazi state exemplifies a liberal democracy that resorted to fascism as a means of fortifying the capitalist system when confronted with economic ruin. The Nazi state re-stabilized the failing German economy by dehumanizing the Jewish population into an extractive resource and establishing a military industry fueled by the destruction of human life. Contemporary Marxists speculate that late-stage imperialism will turn to neo-fascist tendencies in response to the decline of its extractive forces (Foster 2019). Prototype warfare seeks to create a profitable framework for autonomous weapons production by reconfiguring military invasion into a perpetual technological experiment which renders human life into an extractive resource.

References

Black, Edwin. 2001. IBM and the Holocaust: The Strategic Alliance between Nazi Germany and America’s Most Powerful Corporation.

Engels, Friedrich and Karl Marx. 1932. The German Ideology. The Marx-Engels Institute.

Edwards, Paul. 1997. The Closed World: Computers and the Politics of Discourse in Cold War America. The MIT Press.

Finkelhor, David and Michael Reich. 1970. Capitalism and the Military Industrial Complex: The Obstacles to Conversion. Review of Radical Political Economics. 2:4.

Fiott, Daniel. 2016. Europe and the Pentagon’s Third Offset Strategy. The RUSI Journal, 161:1, 26-31.

Foster, John Bellamy. 2019. Late Imperialism. Monthly Review, 71:3.

Gottheil, M. Fred. 1986. Marx versus Marxists on the Role of Military Production in Capitalist Economies. Journal of Post Keynesian Economics, 8:4, 563-573.

Hoijtink, Marijn. 2022. “Prototype Warfare”: Innovation, Optimisation, and the Experimental Way of Warfare. European Journal of International Security 7, no. 3, 322–36.

Kawashima, Ken. 2021. Fascism is a Reaction to Capitalist Crisis in the Stage of Imperialism: A Response to Ugo Palheta. Historical Materialism.

Kuykendall, Roger. 2017. Defense Innovation Unit Experimental (DIUX): Innovative or Excessive? Air War College, Air University.

Office of the Secretary Of Defense. 2019. PE 0307588D8Z: Algorithmic Warfare Cross Functional Team, Budget Item Justification. Unclassified Document. https://www.dacis.com/budget/budget_pdf/FY20/RDTE/D/0307588D8Z_189.pdf

Lopes da Silva, Diego, Nan Tian, and Alexandra Marksteiner. 2021. Trends in World Military Expenditure, 2020. SIPRI.

Mbembe, Achille. 2003. Necropolitics. Public Culture, Volume 15, Number 1, Winter 2003, pp 11-40, Duke University Press.

McElroy, Erin. 2019. Data, dispossession and Facebook: techno-imperialism and toponymy in gentrifying San Francisco. Urban Geography, 40:6, 826-845.

Sandals, Carlos Miguel Branco. 2020. The beginning of Artificial Intelligence arms race: A China-U.S.A. Security dilemma case study. Universidade de Évora, http://hdl.handle.net/10174/28613.

Strout, Nathan. 2022. Intelligence agency takes over Project Maven, the Pentagon’s signature A.I. scheme. C4ISRNET, Intel/GEOINT.

03 F*cking with the Virtual

Marilia Kaisar 1,
1 PhD Candidate, University of California Santa Cruz, Santa Cruz, CA, USA,
mkaisar@ucsc.edu

Abstract

How do bodies incorporate networked technologies in their sexual experiences? F*cking with the Virtual looks at “cybersex” as imagined in the 90s and early 00s to discuss how it has materialized in contemporary commercial sexual technologies: interactive sex toys, VR porn, and dating apps. Viewed through the lens of affect theory, the three cybersex technologies at the center of the essay indicate a shift in modes of interfacing: from the visual/textual to the immersive and finally to the interactive experience. In the early 2000s, cybersex was imagined as an immersive and mediated sexual experience facilitated by technological gadgets and wires. Those imagined technologies have influenced cybersex technologies and their design today. This paper offers a brief survey of the history of cybersex technology, using affect theory and modes of interfacing to consider what cybersex can tell us about our past, present, and future intimate relations to technologies.

Keywords

Cybersex, Technology, Embodiment, Desire, Sexuality, VR Porn, Teledildonics.

Affect and virtuality in cybersex

What role do the body, embodiment, and sensation play when we have sex online? This paper explores instances where “cybersex,” as imagined in the 90s and early 00s, has materialized in contemporary commercial sexual technologies: interactive sex toys, VR porn, and dating apps. The three cybersex technologies indicate a move from the visual/textual (sexting) to the immersive (VR porn) and finally to the interactive (teledildonics), tracing not only the changes in the cybersex experience but also how contemporary technologies are directly influenced by the techno-imaginations of the past.

Terms like affect, the body, cybersex, and the virtual have become slippery surfaces in theoretical encounters. I define cybersex as sex with and through technology, an erotic encounter that utilizes some form of technology to occur. I define affect as sensation, an intensity experienced by both body and mind together, following a Spinozist legacy introduced by Gilles Deleuze and elaborated further by Brian Massumi and by critical studies during what Patricia Clough defines as the “affective turn” (Clough 2007, 1). It is crucial to think of the virtual as it relates to technology, cyberspaces, and VR/AR technologies through a lens of affect, in an attempt to bring embodiment back into the technologically mediated landscape.

Intensity, sensation, and vibration become central in this approach as they allow sensation and affect to circulate between technologies and bodies, human and non-human agents. Expanding on this idea, affect is experienced on the body and through the senses; it is a visceral experience where mind and body form an interrelated unity. Affect is a force, intensity, or flow that penetrates the body and increases or decreases its power and capacity to act (Spinoza 2005, 70). Affect can be a valuable theoretical tool when considering cybersex because it can account for this centrality of sensation and the multiple transformations of desire, lust, stimulation, and vibration that move between bodies and technologies.

Interfacing in Cybersex

In cybersex encounters, the human body appears entangled with different technologies to reach outward and extend, seeking another body, skin, vibration, message, or encounter. Thus, the body first meets different technologies, each of which carries its own mode of interfacing. The interface or mode of cybersex shapes the experience of cybersex. For Alexander Galloway, interfaces can be screens, windows, keyboards, sockets, holes, or channels. It is only when they are in effect that interfaces materialize and reveal what they are. In this way, Galloway defines as “the interface effect” the process of mediating thresholds of self and world (viii). The mode of interfacing between screen and body determines how mediation operates and affects the online experience. How do we interface with technologies to connect intimately with each other?

In the earlier techno-imaginaries of cybersex (fig. 1), all modes of interfacing work in tandem and coexist to create a united experience, an immersive illusion of interactivity. Today, they have materialized not only as distinct technologies but as distinct modes of cybersex. Cybersexual interaction has moved from textual and audio communication to visual and audiovisual, to the promise of immersion in VR porn, arriving at today's wireless, interactive, Bluetooth-enabled sex toys. The relationship between the body and the interface technology (whether a laptop, phone, VR headset, sex toy, or several of these) creates a bodily relationship in physical space.

The interface determines the rules of the interaction; it defines the affordances and the allowances of the communication, and how users relate to both the “experience” and the “screen.” In The Posthuman, Rosi Braidotti explores how the relationship between human bodies and technological others moves between intimacy and intrusion (Braidotti 2013, 89). Braidotti’s notion of “becoming machine” sees bodies and machines as intimately connected through simulation and mutual modification within a circuit of representation-simulation-biomediated bodies. The different modes of interfacing in cybersex structure cybersexual assemblages where affect transforms and reshapes itself as it moves between human and non-human organs, cables, code, wireless connections, and more.

Following Braidotti’s ideas, a focus on the technologies of cybersex and how they come into touch with the human body allows us to move from disembodied cyberspace into an embodied sensorium where the body becomes the center of the experience. Cybersex crafts a unique setup where the body becomes vulnerable in front of or next to technology while also disguised behind a nickname, a profile or a VPN address. Still, there is a tenderness and a vulnerability towards the technology that controls and determines the modality and shape of our interaction.

Virtual Bodies in Cyberspace

To think of the body in cybersex, we first have to think of the body in cyberspace and Virtual Reality (VR). Who is the subject that is fucking with the virtual; that sits in front of the computer; holds the smartphone, swiping and sexting; takes nudes standing in front of a mirror; wears the VR headset to be immersed in a sexual, mystical journey in cyberspace; connects via a Bluetooth-connected sex toy to a lover far away? The mediated landscape of being online, and the technological tools that one engages with to access “cyberspace,” have been completely transformed by the emergence of new technologies. Thus, our contemporary online spatial experiences and communities might not resemble the cyberspace imagined in science fiction.

In The War of Desire and Technology at the Close of the Mechanical Age, Sandy Stone (1995) sees cyberspace as a social environment, one that allows interactions not only between people but also between humans and machines. Those interactions between humans and machines, and between humans through the machine, allow a new identity to emerge. For Stone, cyberspace is “a space of pure communication, the free market of symbolic exchange–and as it soon developed, of exotic sensuality mediated by exotic technology” (Stone 1995, 33).

Cybersex 2: Then and Now

In the 1990s, magazines like Mondo 2000 (1984-1998) and Future Sex (1992-1994) emerged at the intersection of cyberculture and sexual positivity, influenced by cyberpunk fiction writers like William Gibson and Bruce Sterling. In their pages, cybersex was imagined as the future of sex, a combination of sex, drugs (mainly LSD), and future technology that creates an enhancing simulation. A prevalent issue when thinking of cybersex in films, magazines, and cyberpunk discourse was how the sensations experienced by the virtual body in cyberspace could be felt on the physical body of the participant. Solutions come in many forms: Howard Rheingold imagines teledildonics (1992, 345), Kroker imagines electric flesh (1993), and films like “The Lawnmower Man” (1992), “Brainstorm” (1983), and “Live Virgin” (2000) envision cabled bodysuits, machines, and architectures that engulf the body so that they can mediate sensations from cyberspace to the body that stands with/in or beside the device to experience cybersex.

Figure 01. REACTOR, Future Sex Issue 2 Cover, Magazine Cover, 1992

In 1992, on the cover of the second issue of the magazine Future Sex, we see two naked bodies, male and female, enhanced with different technological gadgets and wires: thongs with strap-ons, arms replaced with robotic arm suits, a wired helmet, and goggles connected with wires. Following the magazine's headline “Strap in, Tweak out, Turn On!” the two models are enhanced with technology to experience “Cybersex 2.” Michael Saenz and Reactor speculate how fabrics, sensors, immersive 3D technology, and tactile data would create erotic simulations without the dangers of human interaction (Saenz 1992, 28). The lovers' encounter occurs in cyberspace; the Virtual Reality headset offers access to cyberspace; the sensations experienced during cybersex are felt on the body through teledildonics and bodysuits. In this image, cybersex is imagined as an immersive and mediated sexual experience facilitated by various devices. This early image of cybersex highlights the centrality of the body and how it interfaces with sensory network technologies. The body becomes a mediator; the technologies allow the body to feel, through sensory stimulation, an encounter in virtual cyberspace. Furthermore, it is the technologies depicted in this techno-imagination that are now, thirty years later, defining the future-present of sexual technologies. How have those fantasies of cybersex been realized today through specific technologies?

Screen to Screen and Peer to Peer: Textual, Visual, Audio

The simplest modality of cybersex is fostered as peer-to-peer interaction that takes place using text, sound, or images. From the phone to the computer to the smartphone, from the AOL chatroom to Second Life, cybersex seems to be experienced in writing (textual), in voice (auditory), and in the image (through an exchange of images and/or representational sexual acts within video games). Technology acts as a mediator and interface between the two desiring subjects. The technology stands in between them, creating the excitement of participating in a subculture that is dark, exciting, forward-looking, and futuristic.

In “Romancing the Anti-Body: Lusting and Longing in (Cyber)space,” Lynn Hershman Leeson discusses how, by default, cyberspace requires the user to create a mask, structuring a computer-mediated identity that might or might not correspond to reality. For Leeson, as users are asked to redefine themselves through names, profiles, icons or masks, they are also determining their audiences, spaces and territory. In this way, “anatomy can be reconstituted” (Hershman-Leeson 1996, 325).

Research on cybersex often discusses the potentialities of assuming avatar anti-bodies online, well-crafted personalities that allow each physical body standing in front of the computer to have multiple corresponding bodies in cyberspace. Users on CompuServe having compu-sex, users behind phones having phone sex, and users of Second Life using their avatars to have cybersex share some similar experiences. Those experiences share the element of crafting new identities and creating desiring and desirable bodies. In the environment of the contemporary dating app, texting and exchanging images become a central element of communication. This time the smartphone touch screen becomes the central interface with which the body engages. This text- and image-based sexual communication seems almost like a descendant of the anonymous space of the chatroom that proliferated in the early 00s.

Immersed and Strapped in: From VR Sex to VR Porn

In Virtual Reality, Howard Rheingold (1992) imagines virtual reality sex as a collaboration between virtual reality and teledildonics. Rheingold imagines that, through the marriage of virtual reality and telecommunications networks, teledildonics would allow sexual stimulation to occur by reaching out and touching other bodies in cyberspace. By incorporating a lightweight bodysuit and 3D glasses (a VR headset), one could have a realistic sense of visual, auditory and haptic presence (346), what Rheingold calls an “interactive tactile telepresence” (348). Contemporary VR porn, utilizing a VR headset and a stroker that pairs the porn's strokes to the toy through AI, is the closest thing we have to this immersive vision of cybersex.

In the foreword of Hard Core, Linda Williams (1989) positions pornography as a genre that moves the viewer's body in a particular way. Contemporary VR porn creates a spectacle of visual pleasure where the contemporary stroker allows the viewer's body to be moved to the rhythm of porn, making any VR porn experience interactive. The headset enables immersion, and the stroker promises that the immersion is not only felt on the body but is perfectly paired with what you are viewing: the flow of the porn is the flow of the stroker. VR porn technology promises you can “feel what you see” in real time.

At the same time, VR porn structures a particular form of gaze, an embodied male gaze. There are two ways of looking in VR porn. In solo films and girl-on-girl films, the camera is placed at a safe but close distance to the spectacle, creating a rather voyeuristic gaze of someone who is there looking at the scene from nearby. The 360 camera creates a fishbowl effect; at the center of the scene lies the action/spectacle of the female pornstar(s) who is the center of attention. Mainstream VR porn films place the viewer in the center of the action as an active participant embedded in the body of the male pornstar in the scene.

What is striking in mainstream VR porn is how the gaze is pinned on the body of the male performer, who wears the 360 camera. There is a body that limits the gaze. The particularities of this body dictate the gaze, its elevation, and the way one participates in the scene. The hands of the male pornstar mainly stay on the side, only rarely participating in the sexual act. The phallus of the male pornstar becomes the interactive element, the point of touch between the spectacle and the immersion, as this body part is stimulated by the stroker.

Interacting: From Teledildonics to Bluetooth Sex Toys

Beyond the immersive experience of VR porn, teledildonics today are also marketed toward couples in long-distance relationships. Bluetooth wireless remote-control sex toys for such couples exemplify the concept of virtual sex, allowing partners to “feel each other” when they are apart. In advertisements and representations, Bluetooth sex toys are often presented as a stand-in or proxy for a partner in a long-distance relationship. In the companies' narratives, wireless sex toys can replace sexual experiences with a video call and a pair of interactive sex toys. Bluetooth sex toys are both toys and a complex technology that allows affect, desire, and data to circulate.

Cybersex using interactive sex toys facilitates encounters between human and non-human sexual organs, wireless and Bluetooth connections, smartphones, screens, and satellites. The promise of sex across distances is enabled by the virtual, but only through digital technologies: smartphones, Bluetooth-connected sex toys, modems, and more. In “Technology and Affect: Towards a Theory of Inorganically Organized Objects,” James Ash defines inorganically organized affect as:

an affect that has been brought into being, shaped, or transmitted by an object that has been constructed by humans for some purpose or another (Ash 2015, 87).

Ash argues that there should not be an ontological distinction between organically and inorganically organized types of affect. Still, it is necessary to understand how affect travels and changes from matter to matter, from objects to waves to humans. At the end of the article, Ash nods towards how an object-centered account of affect can decentralize the human in order to think about affective design, objects, encounters, and their afterlives. This expansive theorizing of affect allows us to better consider the relationship between bodies and machines in cybersex. Viewing Bluetooth sex toys through a lens of affect and intra-action lets us think about how we embody and relate to technology, and argues for the need to consider intimate affective assemblages constituted by humans and technological others or non-humans.

Cybersex and beyond

Our technologies have been completely transformed, but our futuristic cybersex fantasies look the same. As bodies interface habitually with devices that connect to the internet and store data in the cloud, our ideas of cybersex remain tied to their historical precedents. It becomes urgent to explore the imaginations of the past alongside the representations of the present, to consider this concept of feeling the virtual, of sensing the other, of connecting intimately through technology to feel each other.

Acknowledgments

I would like to thank the Kenneth Cordray GROW Summer Dissertation Fund, the Arts Dean's Fund for Excellence and Equity scholarship, and the UCSC Film and Digital Media Department Summer Research Award for making my participation in the School of X possible.

References

Ash, James. 2015. ‘Technology and Affect: Towards a Theory of Inorganically Organised Objects’. Emotion, Space and Society 14 (February): 84–90. https://doi.org/10.1016/j.emospa.2013.12.017.

Braidotti, Rosi. 2013. The Posthuman. Cambridge, UK; Malden, MA: Polity Press.

Clough, Patricia Ticineto. 2007. ‘Introduction’. In The Affective Turn: Theorizing the Social, edited by Patricia Ticineto Clough and Jean Halley, 1–34. London; Durham, NC: Duke University Press.

Galloway, Alexander R. 2012. The Interface Effect. Cambridge, UK; Malden, MA: Polity.

Hershman-Leeson, Lynn. 1996. Clicking in: Hot Links to a Digital Culture. Seattle: Bay Press.

Kroker, Arthur. 1993. Spasm: Virtual Reality, Android Music and Electric Flesh. Edited by Bruce Sterling. New York: St. Martin’s Griffin.

Mondo 2000. 1989. Mondo 2000, Issue 01 (AKA Reality Hackers Issue 07). The Internet Archive. Accessed May 2023: http://archive.org/details/Mondo.2000.Issue.01.1989.

Rheingold, Howard. 1992. Virtual Reality: The Revolutionary Technology of Computer-Generated Artificial Worlds - and How It Promises to Transform Society. New York: Simon & Schuster.

Saenz, Mike. 1992. ‘The Cybersex 2 System’. Future Sex, Issue 02. Kundalini Publishing. The Internet Archive. Accessed May 2023: https://archive.org/details/Future.Sex.Issue.02

Spinoza, Benedict De. 2005. Ethics. Edited and translated by Edwin Curley. Penguin Classics. London: Penguin Books.

Stone, Allucquère Rosanne. 1995. The War of Desire and Technology at the Close of the Mechanical Age. Cambridge, MA: MIT Press.

Williams, Linda. 1989. Hard Core: Power, Pleasure, and the Frenzy of the Visible. Berkeley, CA: University of California Press.

04 Melodia Atomizacji

Arthur Kuhn 1,
1 Artist
a.kuhn@kuhnhestale.fr

Abstract

This paper is an introduction to an ongoing transmedia art project that addresses Machine Learning as an artistic medium. It provides a quick overview of what the project aims to talk about, and why I chose to do it the way I did, before dwelling more precisely on the question of Machine Learning's ontology, mobilizing mainly the insights of Brian Cantwell Smith, a computer scientist and philosopher, and of Jacques Derrida.

Keywords

Artificial Intelligence, Machine Learning, AI Art, Counterfactual fictions, Avant-gardes

Introduction

Melodia Atomizacji is an ongoing artistic project comprising a series of installations and an artist book. Both sides of this project intend to immerse the audience in the work and life of Sena Plincski, a Polish artist from the early twentieth century. Soon enough, though, one might realize that all of Sena’s paintings are presented through a series of strangely similar photographs, or that some of the people he wrote letters to were born twenty years after his disappearance.

Figure 01. Last known portrait of Sena Plincski, somewhere near San José, California. Circa 1926.

This might have to do with the fact that Sena Plincski is entirely fictitious. I created this character, gave him a biography and a corpus of artworks, enmeshing him in a web of references that define his life and production as a negative of other, historically real, people. Sena is a fiction used as a front to explore, from a hands-on perspective, the techniques of Machine Learning (ML). More specifically, as will be addressed in this paper, this project is a take on what Brian Cantwell Smith refers to as ML's ontology:1 i.e., what kind of representation of the world and its inhabitants is mobilized in these technologies. I will also provide some diegetic elements about Sena Plincski as context, as well as a brief explanation of why I decided to use such a narrative tool.

Figure 02. Photomontages of Sena Plincski's paintings.

The True Story of an Imaginary Artist

Because of its use of fiction, Melodia Atomizacji is ever more layered. At the foundational level, there is me, an artist willing to experiment with AI image generation. Immediately piling up is the need (whose reasons are explained in the next part of this paper) for a figurehead that would embody ML's functioning through his aesthetic preferences. But then another layer is added, because not only does this figurehead need a corpus that visually reflects what is investigated, it also needs a biography; or, more precisely, a lore. In video games, a lore is defined by contrast with the main narrative: it is all the world-building asides, all the ancillary and accessory elements accessible through, for example, environmental storytelling or object descriptions.2 Here, I choose this term over biography because Sena does not have a life in itself. He exists through his connection, his fictitious symmetry, with other people's works and lives. Even his whereabouts, and the time period during which he lives, serve as metaphorical clues about, first, his non-reality, and second, how and why I came to construct him as he is.

In this regard, I remain indebted to Maria Delaperrière for her article on Polish poetry and its relationship with the French avant-gardes of the early twentieth century.3 It is in her paper that I learned about Tytus Czyzewski, one of the most well-known figures of the Formiści (Formist) movement. A painter and a poet, Czyzewski notably published a book called Zielone oko (Green Eye) that contains the poem Melodia Tlumu (Melody of the Crowd). This poem is one of the defining elements of this project because it reads like a proto-Attempt at Exhausting a Place in Paris, the text by Georges Perec4 in which the writer lists everything he is seeing, in an apparently disconnected fashion, until the words start to lose meaning and their written form becomes the material itself.

Figure 03. Portrait of Tytus Czyzewski. Photography: archives of the Museum of Art in Lodz.

But what interested me even more is the fact that Czyzewski later said that his goal wasn't to empty the words of their meaning. It was quite the opposite: a way to show how he, a Polish exile in a foreign country, tried to connect to this place but kept being overwhelmed by everything he was listing: the sound of the streets, the sight of the people passing by, etc. This stands opposed to the ML tendency that Sena is meant to embody: treating any cultural or meaningful content as statistical probabilities, without regard for meaning.

Figure 04. Tytus Czyzewski, Melody of the Crowd, my translation.

So Sena began as an anti-Czyzewski. I then decided he would go on to live in California, first in Los Angeles and then in what would become Silicon Valley, dabbling in occultist circles and making a reputation for himself by forging fake medieval engravings (which adds another layer to the project, with Sena creating fictions inside his own fiction) and opposing Aleister Crowley on questions of decency. That is where he would live until his disappearance on July 7, 1930, leaving us access to all the letters he ever wrote but never sent to the people they were addressed to.

Figure 05. A selection of Sena's paintings.

The Vas of Tuples, by G30rg35 Br4K: Why turn AI into cubist painters?

Stepping out and aside from the fiction, I'd like to briefly give a few reasons why this project came to be as it is now. It started in 2022, as I entered a program centered on “Artificial Imaginations” (a play on Artificial Intelligence) directed by Gregory Chatonsky and Yves Citton. As we started experimenting with DALL-E or Stable Diffusion, I grew frustrated. These text-to-image tools seemed to put me in too much of a directing position: describing what I would like to see to a machine that would produce something I had to deem close enough or not. As an artist, I tend to, and aim at, working with systems: webs of information and references set in motion by protocols and scripts. What interests me in focusing on digital computational tools is their ability to overwhelm me with their complexity and output capacity. Because I can no longer predict what will be output, yet I retain some leverage over the program, the piece comes to be constructed by the system itself as much as by me, through serendipity and iteration in a computational back-and-forth. The playground of DALL-E offered me the exact opposite of such a praxis, which, to sum it up, meant that I couldn't care less about the images produced.

Yet I still found the counterfactual potential of these technologies exciting: they were very promising providers of fictitious archives. Hence came the desire for an encompassing fiction, something that would take a third seat between the AI and me, something I could make these images interact with, as a way to go beyond the binary evaluation of “this is/isn't what I want to see”.

The second reason, once I decided to go through the extra steps of building a character and giving it a biography, is that fiction is well-suited to both a) a transmedia narrative and b) a fragmented form. It organizes what can seem disparate, instrumentalizing the various materials within a regulatory framework. This plays well with one of my main stylistic inspirations: creepypastas.

Figure 06. Shaurya Thapa, “10 Internet Creepypastas That Are Still Seriously Scary”. 2022. ScreenRant, headline image. A portrait of a young Sena Plincski has been added on the right.

Creepypastas are narratively lacking short stories. These fictions, typical of post-internet writing, are woven, in the absence of information, around scattered elements, often presented as if found, and require a participatory reading. This particular format allows me both to extend my project across several media and to work on blurring the boundary between archive and creation, playing with the counterfactual potential I just mentioned. As for the fragmented form, it means that every element of the piece, every bit of image, installation, object, and so on, doesn't have to carry the whole meaning of the piece by itself; it is the fiction, inserted in the space between each and every bit of visual, that is responsible for building the overall point. The fiction of Sena serves as the between-the-images.5

Lastly, the fiction that is Sena Plincski, his dipping into occultism and his fight against meaning, allows me to shift the framing of this project away from what would be expected. My intuition, paraphrasing Simon Penny, is that we are now so ingrained with digital technologies that they constitute the very images we use when talking about them.6 So it is my firm belief that, in an artistic context, the investigation of such technologies is helped by reframing them outside their ordinary context. And, to this end, I found that the Formist movement of early-twentieth-century Cracow, as well as the Hollywood-occultist craze of the Roaring Twenties, offers quite a radical displacement.

Computational Microwave Background

Finally, we come to the question of ML's ontology as it will be addressed in this project. The crux of my argument lies in the opposition between previous attempts at AI (what John Haugeland nicknamed Good Old Fashioned AI, GOFAI)7 and contemporary ML-based AI. Historically, in a Cartesian-infused vision of what it means to be intelligent, much of the effort was directed towards reasoning upon “clear and distinct entities, exhibiting defining properties and specific behaviors”.8 In response to the failure of this approach, ML was built upon not trying to represent (as in constructing a formal definition of) what it is working upon. More precisely, it uses statistical induction of proximity probabilities, encoding these probabilities as weights in the connections of small and simple calculating units; and, it needs to be noted, removing any semantic value from these connections. This means that the machine itself does not know, can't know and must not know what it is working on. As a paradoxical example, facial recognition became frighteningly effective when scientists stopped trying to explain what constitutes a face and what distinguishes one specific face from another. Explaining it would require building a formal blueprint of a human face, and all the problems of GOFAI (mainly the fact that nothing is as well cut-out and defined as we would think) would surface again. For example, how do you explain that my face is primarily recognizable by its looks, yet its looks will greatly vary over my life without that ever affecting the fact that it is, indeed, my face?
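To make this mechanism concrete, here is a minimal sketch in Python (assuming nothing beyond the standard library) of one such small and simple calculating unit. It is an illustrative toy of my own, not any specific production system: the unit's behaviour is entirely determined by numeric weights, and no semantic label appears anywhere in the computation.

import math

def unit(inputs, weights, bias):
    # One simple calculating unit: a weighted sum squashed into (0, 1).
    # The weights are plain numbers; nothing in them "knows" what the inputs mean.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Hypothetical feature magnitudes (say, pixel intensities from a face crop).
# To the unit they are only quantities to be multiplied and summed.
features = [0.12, 0.98, 0.45]
weights = [0.7, -1.3, 2.1]  # values obtained by statistical induction, not by definition
print(unit(features, weights, bias=0.05))  # about 0.45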

This approach to intelligence can be summarized as recreating the learning process of newborn children after years of trying to reproduce that of a scientist.9 What we are now building is a capacity to ingest, and build connections on top of, a humongous amount of pre-conceptual information. As an artist, this brings on the idea of creating by precisely not trying to pinpoint what it is you want to create. A precise meaning is not what you are looking for here. Not a constructed point. Not building up a very personal and singular manifestation of your subjectivity through the careful elaboration of a formal manifestation of your idea; but allowing yourself to go down to this sort of shapeless mass of “everything that has been made or seen”, to try and steer it towards something “never seen before, before you saw it”.10

These quotations belong to a series of texts in which Derrida talks about what it is to create, based on letters written by Antonin Artaud. In these letters, Artaud recalls (and Derrida interprets) how every creation is a betrayal of this “absolute absence, full of all possibilities” one is compelled to go back to as a first gesture. Both Derrida and Artaud frame creating as mostly silencing every other potential creation. In this context, ML offers us a more direct way than ever of letting that absence flow, of producing infinitely unique variations of the same in a sort of “ontologically trembling” medium.

Moreover, this atomizing poiesis, this tendency to hollow any meaning out of any cultural content and reduce it to a swarm of numbers to be crunched in amounts humans can no longer fathom, comes as a new reflection on the question of authorship and the presence of the artist in the artwork. ML-based generation distributes the parenthood of the image between many elements.

After all, the dataset11 must be very carefully curated and constructed (and is now too massive in scale to be humanly constituted) for the AI model to be able to produce anything convincing. Then, the technical context it is running on censors what you'll be able to produce; and while ultimately the prompt12 sets it in motion, the stochastic nature of AI means that the prompt is important but can't be seen as the most defining element. And then, perhaps more deeply than before because there is no intrinsic, intentional representativity here, the image needs an audience to give a cultural meaning to this probabilistic placement of numerical values as RGB layers. All of which makes me consider ML an anti-noospheric view of computation and the human mind.
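As a minimal illustration of this distributed parenthood, consider the Python sketch below. The generate() function is a stand-in of my own invention, not the API of any real model; it only shows how the same prompt yields different outputs unless the random seed, one more non-human co-author, is pinned down.

import random

def generate(prompt, seed=None):
    # Toy stand-in for a text-to-image model: a prompt plus a random state
    # is mapped to an "image" (here, just a handful of colour words).
    rng = random.Random(seed)
    palette = ["ochre", "cobalt", "umber", "viridian", "alizarin"]
    return prompt + " -> " + ", ".join(rng.sample(palette, 3))

print(generate("portrait of Sena Plincski"))          # varies on every run
print(generate("portrait of Sena Plincski", seed=7))  # reproducible
print(generate("portrait of Sena Plincski", seed=7))  # identical to the previous line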

Conclusion

The noosphere is a philosophical concept developed by both the Jesuit priest Teilhard de Chardin and the biogeochemist Vladimir Vernadsky. I am mainly referring to de Chardin's interpretation, as it is the one I am familiar with. It is based on the idea that after the geosphere and the biosphere comes the noosphere, the sphere of all human cognition, perpetually built and thickened by our collective intellectual activities. What interests me in this image is that it conveys a sense of human intelligence as perpetually going towards more: more ideas, more complexity, more links between more complex ideas, etc. It finds an interesting echo in much of what digital computing, through its networking and complexifying aspects, has been about up until this point. But with the arrival of ML, there is a strange clash of representations in the proposition that, actually, the best way to integrate and mimic the “global, planetary consciousness” Big Tech CEOs still seem to dream about to this day is to break it down into a background, computational noise.

Figure 07. A drawing after one of Sena's poems.

References

Anderson, Sky LaRell. 2019. “The Interactive Museum: Video Games as History Lessons through Lore and Affective Design”. E-Learning and Digital Media 16 (3): 177-195. https://doi.org/10.1177/2042753019834957.

Bellour, Raymond, ed. 2012. Between-the-Images. Documents/Documents Series 6. Dijon: Les presses du réel.

Czyzewski, Tytus. 1922. Zielone Oko.

Delaperrière, Maria. 2003. “La poésie polonaise face à l’avant-garde française : fascinations et réticences”. Revue de littérature comparée 307 (3): 355. https://doi.org/10.3917/rlc.307.0355.

Derrida, Jacques. 1967. L’écriture et la différence. Points Série essais 100. Paris: Éditions du Seuil.

Haugeland, John. 1985. Artificial intelligence: the very idea. Cambridge, MA: MIT Press.

Penny, Simon. 2019. Making Sense: Cognition, Computing, Art, and Embodiment. Cambridge, MA : MIT Press.

Perec, Georges. 2010. An Attempt at Exhausting a Place in Paris. Imagining Science 1. Cambridge, MA: Wakefield Press; New York: D.A.P./Distributed Art Publishers [distributor].

Smith, Brian Cantwell. 2019. The Promise of Artificial Intelligence: Reckoning and Judgment. Cambridge, MA; London: The MIT Press.

Teilhard de Chardin, Pierre. 1955. Le phénomène humain. Points 222. Paris: Éd. du Seuil.

Thapa, Shaurya. 2022. “10 Internet Creepypastas That Are Still Seriously Scary”. Screen Rant, November 19, 2022. https://screenrant.com/scariest-internet-creepypastas/.

Footnotes

  1. “I use the term ‘ontology’ in its classical sense of being the branch of metaphysics concerned with the nature of reality and being – that is, as a rough synonym for ‘what there is in the world’.” Brian Cantwell Smith. 2019. The Promise of Artificial Intelligence: Reckoning and Judgment. Cambridge, MA; London: The MIT Press, 57.

  2. For more information on this notion of lore, and the use of item descriptions as a narrative tool, see: Anderson, S. L. 2019. “The Interactive Museum: Video Games as History Lessons through Lore and Affective Design”. E-Learning and Digital Media 16 (3): 177-195. https://doi.org/10.1177/2042753019834957

  3. Maria Delaperrière. 2003. “La poésie polonaise face à l’avant-garde française : fascinations et réticences”. Revue de littérature comparée 307 (3): 355. https://doi.org/10.3917/rlc.307.0355.

  4. An Attempt at Exhausting a Place in Paris is a 1975 short book by the French novelist Georges Perec. In a methodological writing experiment, it consists of describing all the ordinary, mundane and unremarkable things that usually go unnoticed, observed while Perec sat in Saint-Sulpice Square, in Paris, for a day.

  5. Raymond Bellour, ed. 2012. Between-the-Images. Documents / Documents Series 6. Dijon: Les presses du réel.

  6. Simon Penny, 2019. Making Sense: Cognition, Computing, Art, and Embodiment. Cambridge, MA: MIT Press.

  7. John Haugeland. 1985. Artificial intelligence: the very idea. Cambridge, MA: MIT Press.

  8. This definition of GOFAI, as well as its ML counterpart, are both extracted from Brian Cantwell Smith, ibid, 28.

  9. Brian Cantwell Smith, ibid.

  10. Jacques Derrida. 1967. L’écriture et la différence. Points Série essais 100. Paris: Éditions du Seuil, p. 15. My translation.

  11. The dataset is the ensemble of images and metadata used to train a neural model.

  12. In the case of text-to-image tools, the prompt designates the text the user submits to the machine.

05 Open: A Pan-ideological Panacea, a Free Floating Signifier

Andrea Liu 1,
1 ZhdK (Zurich University of the Arts) 2023-24 Fellow, New York/Berlin/Paris
andrea635@protonmail.com

Abstract

“Open” is a word that originated from FOSS (the Free and Open Source Software movement) to mean a Commons-based, non-proprietary form of computer software development (Linux, Apache) based on a decentralized, poly-hierarchical, distributed labor model. But the word “open” has now acquired an unnerving over-elasticity, a word that means so many things that at times it appears meaningless. This essay is a rhetorical analysis (if not a deconstruction) of how the term “open” functions in digital culture, the promiscuity (if not gratuitousness) with which the term is utilized in the wider society, and the sometimes blatantly contradictory ideologies indiscriminately lumped together under this word.

Keywords

FOSS (Free and Open Source Software Movement), Linux, open access, Creative Commons License, copyfarleft, Telekommunisten Manifesto

“Open” is a term that has acquired an unnerving over-elasticity, a word that means so many things that at times it appears meaningless. A word that originated from FOSS (the Free and Open Source Software movement) to mean a Commons-based, non-proprietary form of computer software development (Linux, Apache) based on a decentralized, poly-hierarchical, distributed labor model, “open” has now radiated its innumerable capillaries into fields as diverse as pedagogy, publishing, activism, party politics, government, science, and more. “Open” can now mean a rhizomatic social formation that rejects top-down bureaucracy in favor of the peer-to-peer network; “open” as an insurgency against the for-profit publishing industry's attempt to commodify knowledge (Open Access); “open” as a Paulo Freire-like pedagogy where students are active creators, not just passive consumers, of knowledge; “open” as a form of direct democracy which rejects representative intermediaries. Let's take a traipse through the cultural ubiquity of the term “open.”

On Deepdyve (a website giving access to academic publishers' books and articles), in the “Journals” section there are no fewer than 29 journals whose names begin with “Open”: Open Agriculture, Open Archaeology, Open Astronomy, Open Chemistry, Open Computer Science, Open Cultural Studies, Open Economics, Open Economies Review, Open Education Studies, Open Engineering, Open Forum Infectious Diseases, Open Geosciences, Open Geospatial Data, Software and Standards, Open Health, Open Information Science, Journal of Open Innovation: Technology, Market and Complexity, Open Life Sciences, Open Linguistics, Open Material Science, Open Mathematics, Open Medicine, Open Philosophy, Open Physics, Open Political Sciences, Open Psychology, Open Statistics, Open Systems and Information Dynamics, Open Theology, and Open Veterinary Science. (It is telling that of the 29 journals beginning with “Open,” not a single one is related to art, as art is still largely predicated on originality, exclusivity, the “aura” of singular authorship, and is therefore resistant to notions of large-scale collaborative, decentralized, or distributed authorship. That said, one of the few exceptions is, surprisingly enough, the Yale University School of Art website, which is an “open access” wiki whereby any student of the School of Art can change, add to or alter the website: https://www.art.yale.edu/)

Then we have President Obama, signing a “Memorandum on Transparency and Open Government.” It declared: “My administration is committed to creating an unprecedented level of openness in Government. We will work together to ensure public trust and establish a system of transparency, public participation, and collaboration. Openness will strengthen our democracy and promote efficiency and effectiveness in Government” (Obama, 2009). Obama's paean to governmental transparency was searingly ironic: Obama prosecuted more whistleblowers than all previous U.S. presidential administrations combined, and his administration murdered more people with drones in Pakistan, Yemen and Somalia, with no congressional oversight, no due process and no trial, carrying out more strikes in his first year than President George W. Bush did in his entire eight years (Purkiss and Serle, 2017).

But let‘s stop here and take an inventory of the multifarious contexts and valences “open” operates in:

(1) “open” as a rejection of intellectual property/private property

(2) “open” as a quality of a system

(3) “open” as a rhizomatic social formation that rejects top-down bureaucracy in favor of a non-hierarchical, peer-to-peer network

(4) “open” as a form of direct democracy which rejects representative intermediaries

(5) “open“ as an insurgency against the for-profit publishing industry’s attempt to commodify knowledge (Open Access)

(6) “open” as transparency of information

(7) “open” as FOSS (the Free and Open Source Software Movement), a non-proprietary form of software development (i.e. the Linux operating system, the Apache web server) which radically inverts the notion of “property as the right of exclusion,” instead reconceptualizing “property as the right to distribute,” aimed at creating a social structure that expands, not restricts, the resources of the commons

(8) “open” as an innovative/competitive production method to accelerate new forms of capitalist accumulation

(9) “open” as capitalist deregulation

(10) “open” as a Paulo Freire-like pedagogy where students are active creators, not just passive consumers, of knowledge

It appears #1 is in direct contradiction with #8 and #9: the same word used for a revolt against private property (#1) also names an innovative method to increase capitalist accumulation based on private property (#8 and #9). Not only are there contradictory uses of the term by different speakers; even within the same speaker there is contradiction, resulting in an Orwellian doublespeak whereby the words one utters actually signify the opposite of what one means (Obama). “Open” is the clarion call, the principle that inspires ideologies as contradictory as communitarianism, anarchism, post-Marxist autonomism, and pro-market capitalist neoliberalism, as when both the free-market-friendly Lawrence Lessig and the radical post-autonomist Marxist theorists Michael Hardt and Antonio Negri hail “open” as the new threshold to a better world:

Most think about these issues of free software, or open source software, as if they were simply questions about the efficiency of coding. Most think about them as if the only issue that this code might raise is whether it is faster, or more robust, or more reliable than closed code. Most think that this is simply a question of efficiency. Most think this, and most are wrong. . . . I think the issues of open source and free software are fundamental in a free society. I think they are at the core of what we mean by an open society (Lessig, 2005).

One approach to understanding the democracy of the multitude, then, is as an opensource society, that is, a society whose source code is revealed so that we all can work collaboratively to solve its bugs and create new, better social programs (Hardt and Negri, 2004).

To trace the point at which “open” became culturally ubiquitous, it is perhaps useful to turn to Steven Weber's The Success of Open Source. He explains how open source in computer software development is “an experiment in building a political economy, a system of sustainable value creation” that rejects the notion of property as exclusion (Weber, 2004). To give a counter-example, Microsoft and Apple define “property” as exclusion: if you buy Microsoft Windows, you can use it, but you cannot modify it, improve it, or redistribute your own version of Windows to others because of copyright, licenses and patents. Weber explains how “source code is a list of instructions that make up the ‘recipe’ for software” and that Microsoft does not release its source code. However, with the invention of the Linux kernel (the core of an operating system) by Linus Torvalds and countless collaborators (1992-94), a new model of software development arose whereby the source code for Linux was released to anyone who wanted to use it, without royalties or licensing fees to the author (Weber, 2004). This gave rise to a modular, decentralized, non-hierarchical (or perhaps what Jonathan Zittrain would call “poly-hierarchical”) model of labor within software development whereby copying source code (free of copyright and patent restrictions) is not only allowed, but is the ontological centerpiece of the entire system.

However, as Weber goes on to show, this very specific set of conditions of the Open Source software labor model has been banalized and rendered unrecognizable in the ongoing indiscriminate mania for “open”:

“A note of caution: As open source has begun to attract broad public attention over the last few years, the term itself has been overused as a metaphor. There are now experiments with an open-cola alternative to Coke and Pepsi, an “open music” registry, an “openlaw” project at Harvard Law School, and any number of “open content” projects to build mass encyclopedias, for example. Many of these are simply “open” forums in the sense that anyone can contribute anything they wish to a mass database. [...] Many of these projects gain their ideological inspiration from the open source process and tap into some of the same motivations. But in many instances these projects are not organized around the property regime that makes the open source process distinctive” (Weber, 2004).

Recalling my earlier lament that traits from one strand of the Open movement are automatically ascribed to another without any analysis of whether the transfer even makes sense, John Wilbanks (former Executive Director of the Science Commons Project at Creative Commons) addresses this concern with respect to carrying Open Software principles from the computer industry into science (i.e. Open Science):

A third problem is that science is a long, long, long, long, long way from being a modular knowledge construction discipline. Whereas writing code forces the programmer to compile the code, and the standard distribution forces a certain amount of interoperability, scientists typically write up their knowledge as narrative text. It's written for human brains, not silicon compilers. Scientists are taught to think in a reductionist fashion, asking smaller and smaller questions to prove or disprove specific hypotheses. This system almost guarantees that the tasks fail to achieve modularity like software, and also binds scientists through tradition into a culture of writing their knowledge in a word processor rather than a compiler. Until we can achieve something akin to object-orientation in scientific discourse, we're unlikely to see the distributed innovation erupt as it does in culture and code (Wilbanks, 2009).

In delving further into the politics of how the word “open” functions, it is useful to turn to Nathaniel Tkacz's Wikipedia and the Politics of Openness (2015). This book occupies an intriguingly unusual pocket, triangulating between epistemology, linguistics and language deconstruction, and the politics of social polity formation, all through the prism of a granular analysis of the 34 arguments in favor of and the 17 arguments against a certain Wikipedia entry being deleted or preserved. Tkacz describes the shield of opacity, an immunity from political examination, which the word “open” enjoys:

I hope to have made clear that the general deployment of the open in institutional politics, and as a political concept more generally, cannot be separated from its emergence in software and network cultures. Indeed, it is perhaps more accurate to posit that today’s openness is evidence of the networked and computational, even cybernetic, nature of governance. Through these multiple trajectories, openness is placed in a variety of settings, articulated alongside different concepts, and put to use in different ways. The open circulates, scales up, garners new allies, is reconfigured, distinguished, and remixed; each movement troubles and destabilizes the articulation of its meaning.

Of all the authors cited in the account of openness I have developed here, for example, very few have turned a critical eye to the open, and there has been very little criticism about specific open projects. If a critical word is written, it is rarely substantial and most likely about how one small component can be made better, more open. Somewhat ironically, once something is labeled open, it seems that no more description is needed. Recalling Kelty’s remarks, openness is the answer to everything and it is what we all agree upon (Tkacz, 2015).

Digital Heretics Who Question “Open”

Apparently there are a few heretics who venture to lift the hood of the car and see what’s beneath, to pierce the shield of opacity, the immunity which “open” enjoys. One example of such a heretic is Michael Gurstein. After attending an Open Knowledge (OK) conference, he writes:

The ideal that these revolutionaries are pursuing is not, as with previous generations—justice, freedom, democracy—rather it is “openness” as in Open Data, Open Information, Open Government. Precisely what is meant by “openness” is never (at least certainly not in the context of this conference) really defined in a form that an outsider could grapple with (and perhaps critique). Rather it was a pervasive and animating good intention—a grail to be pursued by warriors off on a joust with various governmental dragons.

Another heretic is Gary Hall. While Hall is a vociferous advocate of open access, he is also critical of some of the assumptions of “open,“ going as far as addressing the “violence“ which the ethos of openness and transparency conceals:

The first point to make in this respect is that, far from revealing any hitherto unknown, hidden or secret knowledge, such discourses of openness and transparency are themselves often not very open or transparent. [...] Yet, actually, complete transparency is impossible. This is because, as Clare Birchall has shown, there is an aporia at the heart of any claim to transparency. ‘For transparency to be known as transparency, there must be some agency (such as the media [or politicians, or government]) that legitimizes it as transparent, and because there is a legitimizing agent which does not itself have to be transparent, there is a limit to transparency’ (Birchall, 2011). In fact, the more transparency is claimed, the more the violence of the mediating agency of this transparency is concealed, forgotten or obscured (Hall, 2011).

Then we have The Circle, a dystopian novel by Dave Eggers which caricatures the smug utopianism of a fictional Google-type company (the “Circle”) as a Scientology-like cult obsessed with transparency. The techno-Eden company's faux-friendly, thinly humanistic veneer conceals a fascist-like intolerance of any employee maintaining an independent interior life (or privacy) outside of the Circle's 24/7 transparency megaplex.

Perhaps most intriguing is Wendy Chun's suggestion in her book Programmed Visions that “open” is merely a compensatory mechanism: as computers become more unreadable to the layman and the density of their operations increasingly opaque, users are given more to see; more is made superficially visible or “open” (to allay our anxiety that computers have become inscrutable):

As our machines increasingly read and write without us, as our machines become more and more unreadable [...]

A Possible (though only provisional) Solution?

In Net Delusion: The Dark Side of Internet Freedom, Evgeny Morozov rails against the use of the word “internet” to refer to wildly different entities. In a similar vein, in order to militate against the over-elasticity of the term “open”, an umbrella term under which blatantly contradictory ideologies are indiscriminately lumped together, I propose we instead break down the moniker of “open” into more specific denominations. For example:

(1) “affirmative open” vs. “transgressive open”

(2) “communist open” vs. “capitalist open”

(3) “activist open” vs. “entrepreneurial open”

(4) “rejectionist open” vs. “accommodationist open”

While it is beyond the scope of this essay to delineate all four categories, in order to explain #4, “rejectionist open” vs. “accommodationist open”, I might refer to the work of Gary Hall on the Creative Commons License. Hall critiques the standard Creative Commons License as still accommodating private intellectual property ownership, in contrast to Dmytri Kleiner's notion of anti-copyright, which rejects copyright outright (Hall, 2011). The Creative Commons License would be “accommodationist open,” while Kleiner's anti-copyright concept would be “rejectionist open.” The excerpt below from Kleiner's Telekommunisten Manifesto exemplifies Kleiner's staunch rejection of intellectual property rights:

Intellectual property is fraud, a legal privilege to falsely represent oneself as the sole ‘owner’ of an idea, expression or technique and to charge a tax to all who want to perceive, express or apply this ‘property’ in their own productive practice. It is not plagiarism that dispossesses an ‘owner’ of using an idea, it is intellectual property, backed by the invasive violence of a state that dispossesses everyone from the use of their common culture. […]

Kleiner then criticizes the Creative Commons License as an accommodation with the status quo intellectual property regime:

What began as a movement for the abolition of intellectual property has become a movement of customizing owners’ licenses. Almost without notice, what was once a threatening movement of radicals, hackers and pirates is now the domain of reformists, revisionists, and apologists for capitalism. (Kleiner, 2010).

Hall also laments that “radical” theorists enamored with activist movements, writing books on the commons (sometimes published by the likes of Verso or Zero Books), sometimes fail to consider the political and economic ramifications of publishing with a profit-maximizing corporation instead of with an Open Access or copyfarleft license. (Hall, director of the Centre for Postdigital Culture, publishes “Liquid Books”: experimental, post-paper-centric digital books based on principles of “open editing” and open access; books with no fixed end or beginning, inviting readers to remix, reformat, and reinvent the book, called “Unidentified Digital Objects”.)

As a guideline for coming up with a way to parse out the different strands of “open,” it would be helpful to use David Auerbach's chart as a starting point. New York-based technology writer David Auerbach (author of Bitwise: A Life in Code) wrote an essay, “#JeNeSuisPasLiberal: Entering the Quagmire of Online Leftism” (Auerbach, 2015), in which he attempted to tease out different factions of the liberal-left using a chart with four quadrants:

I propose Auerbach's chart simply as a provisional framework, because 1) it seeks to differentiate an entity or phenomenon that heretofore has only been treated as a monolithic mass into more specific denominations, and 2) it then charts these sub-categories (or factions) into four quadrants, using a North-South/East-West axis to set them in relation to each other and to certain characteristics. Constructing an equivalent of Auerbach's chart for the Open movement would be instrumental in differentiating between the different denominations of Open.

References

Auerbach, David. 2015. “#JeNeSuisPasLiberal: Entering the Quagmire of Online Leftism.” The American Reader. Available at: http://theamericanreader.com/jenesuispasliberal-entering-the-quagmire-of-online-leftism/

Bauwens, Michel. 2013. “Thesis on Digital Labor in an Emerging P2P Economy.” In Digital Labor: The Internet as Playground and Factory, edited by Trebor Scholz. New York: Routledge.

Birchall, Clare. 2011. “Transparency, Interrupted: Secrets of the Left.” Theory, Culture and Society, December 1. Vol. 28, No. 7-8.

Chun, Wendy. 2013. Programmed Visions: Software and Memory. Cambridge, MA: MIT Press.

Dean, Jodi, Jon Anderson, and Geert Lovink, eds. 2013. Reformatting Politics: Information Technology and Global Civil Society. New York: Routledge.

Dyer-Witheford, Nick. 2015. Cyber-Proletariat: Global Labour in the Digital Vortex. Chicago: University of Chicago Press.

Eggers, Dave. 2014. The Circle. New York: Vintage.

Friedersdorf, Conor. 2012. “How Team Obama Justifies the Killing of a 16-Year-Old American.” The Atlantic, October 24. Available at: https://www.theatlantic.com/politics/archive/2012/10/how-team-obama-justifies-the-killing-of-a-16-year-old-american/264028/

Gurstein, Michael. 2011. “Are the Open Data Warriors Fighting for Robin Hood or the Sheriff? Some Reflections on OKCon 2011 and the Emerging Data Divide.” Gurstein's Community Informatics, July 3. Available at: https://gurstein.wordpress.com/2011/07/03/are-the-open-data-warriors-fighting-for-robin-hood-or-the-sheriff-some-reflections-on-okcon-2011-and-the-emerging-data-divide/.

Gurstein, Michael. 2011. “Open”— “Necessary” but not “Sufficient,” Gurstein’s Community Informatics, July 6. Available at: https://gurstein.wordpress.com/2011/07/06/%e2%80%9copen%e2%80%9d-%e2%80%93-%e2%80%9cnecessary%e2%80%9d-but-not-%e2%80%9csufficient%e2%80%9d/.

Hall, Gary, ed. 2011. Digitize Me, Visualize Me, Search Me, Open Science and Its Discontents. London: Open Humanities Press.

________. 2016. Pirate Philosophy: For a Digital Post-Humanities. Cambridge, MA: MIT Press.

Hardt, Michael, and Antonio Negri. 2004. Multitude: War and Democracy in the Age of Empire. New York: Penguin Press.

Kalathil, Shanthi, and Taylor Boas. 2003. Open Networks, Closed Regimes: The Impact of the Internet on Authoritarian Rule. Washington, DC: Carnegie Endowment for International Peace.

King, Jaime. 2013. “Openness and Its Discontents.” In Reformatting Politics: Information Technology and Global Civil Society. New York: Routledge.

Kleiner, Dmytri. 2010. The Telekommunisten Manifesto. Amsterdam: Institute of Network Cultures. Available at: http://media.telekommunisten.net/manifesto.pdf

Lessig, Lawrence. 1999. Code and Other Laws of Cyberspace. New York: Basic Books.

________. 2005. “Open Code and Open Societies.” In Perspectives on Free and Open Source Software, edited by J. Feller, B. Fitzgerald, S.A. Hissam and K.R. Lakhani, 349-60. Cambridge, MA: MIT Press.

Levy, Steven. 1984. Hackers: Heroes of the Computer Revolution. Garden City, NY: Anchor Press/Doubleday.

McKiernan, Erin. 2015. “Open Pledge,” OpenCon 2015. Available at: https://figshare.com/articles/Open_pledge/1609777/3

Morozov, Evgeny. 2011. Net Delusion: The Dark Side of Internet Freedom. New York: Public Affairs.

_________. 2013. To Save Everything, Click Here. New York: Public Affairs.

Obama, Barack. 2009. “Transparency and Open Government,“ Memorandum for the Heads of Executive Departments and Agencies, January 21. Available at: https://obamawhitehouse.archives.gov/the-press-office/transparency-and-open-government

OECD. 2015. “Economic and Social Benefits of Internet Openness.” OECD Digital Economy Papers, No. 257. Available at: https://one.oecd.org/document/DSTI/ICCP(2015)2/en/pdf

Poole, Steven. 2013. “To Save Everything, Click Here by Evgeny Morozov.” The Guardian, March 20. Available at: https://www.theguardian.com/global/2013/mar/20/save-everything-evgeny-morozov-review

Purkiss, Jessica, and Jack Serle. 2017. “Obama's Covert Drone War in Numbers: Ten Times More Strikes than Bush.” The Bureau of Investigative Journalism, January 17. Available at: https://www.thebureauinvestigates.com/stories/2017-01-17/obamas-covert-drone-war-in-numbers-ten-times-more-strikes-than-bush?

Scholz, Trebor, ed. 2013. Digital Labor: The Internet as Playground and Factory. New York: Routledge.

Terranova, Tiziana. 2000. “Free Labor: Producing Culture for the Digital Economy.” Social Text. Summer. Vol. 18, Number 2.

Tkacz, Nathaniel. 2015. Wikipedia and the Politics of Openness. Chicago: University of Chicago Press.

Vlavo, Fidele. 2013. “The Digital Hysterias of Decentralisation, Entrepreneurship and Open Community” Transformations Journal. Issue 23. Available at: http://www.transformationsjournal.org/wp-content/uploads/2016/12/Vlavo_Trans23.pdf

Weber, Steven. 2004. The Success of Open Source. Cambridge, MA: Harvard University Press.

Wilbanks, John. 2009. “Open Source Science? Or Distributed Science?“ Science Blogs. October 30 2009. Available at: https://scienceblogs.com/commonknowledge/2009/10/30/open-source-science-or-distrib

Yasseri, Taha. 2016. “Social Aspects of Collaborative Editing on Wikipedia: Revenge, Conflict and War.“ Oxford Internet Institute, University of Oxford. January 19. Available at: https://www.youtube.com/watch?v=DAjb3V_eoeI

Zittrain, Jonathan. 2008. The Future of the Internet and How to Stop It. New Haven, CT: Yale University Press.

06 Analogue, Anomalous, Amorphous: The Creative Possibilities of Computation beyond Technocapitalism

Mariana Salera Marangoni 1,
1 Department of Fine Arts, Camberwell College of Arts, London, United Kingdom
m.marangoni@lcc.arts.ac.uk

Abstract

This paper provides a comprehensive analysis of the current state of alternative computation and its implications within the broader technological landscape, proposing a refusal of current computational paradigms propelled by capitalist overproduction, relentless innovation and extractivist materiality. At present, the Information and Communication Technology (ICT) industry's carbon footprint is already equivalent to that of the aviation industry, and it is only expected to increase (Freitag et al. 2021), a daunting prospect amid the impending climate emergency.

Drawing upon the notion of both self-imposed and external constraints, this study emphasizes the importance of subverting, reimagining, and repurposing technology instead of simply adopting a retro-computing framework, as computers have a long and problematic history regarding their harmful materiality, engineered bias and discrimination towards multicultural practices and epistemes beyond the English-speaking West. The objective of exploring a multiplicity of unorthodox software and hardware isn’t to replace conventional silicon devices entirely, but to offer new research paths and ontological questions of what constitutes computation outside of the heat-exuding, silicon black-boxes that were imposed as the norm.

Keywords

Materiality, Sustainability, Energy Footprint, Unconventional Computing, Esoteric Programming Languages, Computational Culture, Creative Constraints.

Technological hubris on an exhausted Earth

The development and deployment of digital technologies have reached a point where the surface of the planet has been converted into a new stratum of entwined and interconnected technostructures, or long-lasting ‘technofossils’ (Zalasiewicz et al. 2014), as they are rapidly rendered as e-waste. This scenario is fomented by the techno-capitalist loop of increasing obsolescence and surplus that exhausts the planet's finite resources, powered by the narratives of a so-called age of innovation, which is another way of stating that we are saturated with failure (Davidson in Kane 2019, 4).

Philosopher of technology Benjamin Bratton claims that “the concept of ‘climate change’ is an epistemological accomplishment of planetary scale computation” (Bratton 2021), an argument first discussed in the context of early meteorological science (Ruskin in Pasquinelli 2017). However, this statement overlooks that, in order to develop, build, and deploy such vast apparatuses, a massive explorative and extractivist economy was required, which only accelerated the very phenomena it tries to acknowledge. This conundrum is further explained by media archaeologist Jussi Parikka: “Data feeds [off] the environment both through geology and the energy-demand” (2015). The relentless drive to measure, map and quantify the Earth and its inhabitants solely for human knowledge always generates a shadow companion, the hastening of resource depletion and climate collapse, which in turn modifies not only the world, but the data collected.

Yet the arguments in favor of ICT's potential to optimize and curb the carbon emissions of every industrial sector are still abundant, with many claiming that this is the only way to eventually achieve carbon neutrality. Those technosolutionist predictions are largely unsupported by actual data, which instead suggest (Freitag et al. 2021) that in the current state of climate emergency there is no time to expand those infrastructures based on implausible Promethean promises.

There is also the often-overlooked risk of a rebound effect, or even a Jevons paradox, in the greenwashed capitalist pursuit of maximum energy efficiency. In this refusal to acknowledge the planetary limits of technological expansion, governments and corporations push this rhetoric at any expense, even if it requires taking exoplanetary exploration as the new frontier to appropriate, exploit and profit from, enabling neo-imperialist extractive logics to endure for the foreseeable future.

As a consequence, TESCREAL proponents (Gebru and Torres 2023) are fomenting the pursuit of Artificial General Intelligence (AGI) and the myth of technological singularity, driving forth the illusion of a fully-automated, abundant future guided by supra-human intelligence. However, these ‘utopian’ scenarios tend to hide the fact that relentless technological innovation does not only generate new ways of supporting life, but also new degrees of suffering and extinction. Recent machine learning models, such as OpenAI's and Google's Large Language Models (LLMs), are based on the maxim of ‘bigger and better’, and require ever-increasing access to data, computational power, and infrastructure that would be impossible to replicate outside Big Tech. The ones who benefit from those advancements are usually the least affected by the deadly consequences they are inflicting upon the lives of many others, widening the gap of social inequality into an unprecedented epistemological abyss: on one side, the ones who prevail in the glorious distant future of an AI-powered interplanetary humankind (Kurzweil 2005, Moravec 1999), and on the other, the ones forgotten amidst the debris of their old, exhausted planet.

The creation of a necroculture

The necrocultural (Thorpe 2016) implications of capitalist societies have been brought to attention by many scholars within the fields of postcolonial theory, critical theory, and Marxist theories of technology and power, such as Achille Mbembe's necropolitics (2003), Georges Bataille's notion of surplus and expenditure (1949), Michel Foucault's biopower (1976), and Byung-Chul Han's notion of a capitalistic death-drive (2019). Their contributions lay the groundwork for this paper to join the ongoing debates by intersecting consolidated socio-political theories with computational engineering and design, for a new understanding of the necro-legacies of computation particular to the accelerated but vapid state of contemporary technological innovation.

Digital necropolitics, or data necropolitics, drawing directly from Mbembe's notion, is more focused on the damages of a culture influenced by algorithms, the online representation of bodies, and the spectacularization of death. Conversely, this study approaches ICT advancements from a more infrastructural and ecological perspective, focusing on the violence of subjugating planetary ecosystems as a mere resource. It also involves a Marxist and decolonial approach attuned to racial justice, as these are necessary to fully analyze the disparity of labor conditions within the tech industry, which relies heavily on an invisibilized global workforce that is neither welcomed in its shiny Silicon Valley offices nor given access to competitive salaries or benefits. Those workers, primarily people from the Global Majority, extract, assemble, and deal with the aftermath of short-lived electronic devices turned into e-waste, contributing to an overseas prosperity at the expense of their own health and local environment.

Another layer of the problems of modernity concerns those who fail to be assimilated by the capitalist system as ‘useful’ members of society, either as consumers or exploited workers, such as Indigenous Peoples and Local Communities. They are the last beacon of a pre-industrial way of life, which keeps the environment in the lands they own, manage, use, or occupy significantly less disturbed than in other circumstances (IPBES 2019). However, their resilience and traditional ways of living are under increasing pressure as the impacts of industrial growth affect the whole global ecosystem. Tragically, capitalism thrives by usurping the resources left untouched by gentler, slower forms of living, while tending to entrap and subsume any non-capitalist organization (Luxemburg in Bauman 2004, 70) in an inescapable spiral.

A proliferation of Alternatives

Notwithstanding the persistent reminder that “it seems to be easier for us today to imagine the thoroughgoing deterioration of the earth and of nature than the breakdown of late capitalism” (Jameson 1994, XII), this section presents a comprehensive but not exhaustive list of unconventional and alternative ways of researching and implementing new computational models that steer away from the necrocultural practices of the field. They all embrace, to different degrees, a radical rethinking of digital materiality: imagining what configurations of future computation may look like in a collapsing scenario where the tools we take for granted today are no longer viable, or what else could be built when decoupling computers from a capitalist notion of maximized efficiency as surrogate labor (Parisi and Ferreira da Silva 2021).

In fact, computers have a long history of engineered exclusion intertwined in their architecture, be it software or hardware, as well as a notorious association with the US military research that made possible many of the tools and features widely available today (Edwards 1997). At the advent of personal computers, engineers prioritized encoding the Latin alphabet for English usage in 8-bit character encodings, making non-Latin scripts such as Arabic or Japanese virtually impossible to handle within the limited memory of the time. More than 40 years later, most high-level programming languages still demonstrate how Western imperialism has shaped computer science: no programming language broadly used in industry today is optimized for anyone other than English speakers, showing how dominant systems disguise their biases and colonial rationale as natural, given realities.
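
The exclusion baked into 8-bit encodings is easy to demonstrate. As a minimal, illustrative sketch (not drawn from any of the cited works), the following Python snippet shows how a legacy single-byte encoding such as Latin-1 simply has no room for Arabic script, while a variable-width encoding like UTF-8 handles it without issue:

```python
# An 8-bit encoding can address at most 256 characters, so Latin-1
# has no code points for Arabic script; UTF-8 does not share this limit.
greeting = "مرحبا"  # Arabic for "hello"

print(greeting.encode("utf-8"))  # encodes fine, two bytes per letter here

try:
    greeting.encode("latin-1")
except UnicodeEncodeError as err:
    print("Latin-1 cannot represent this script:", err)
```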

In a "prompted everything" era it can be challenging to step back and inspect the inner logics of current black-boxed devices, to understand how operating systems (OS) and programming languages (PL) work, and to propose radical alternatives to already established paradigms. The time and hard skills involved leave out many people who can't afford to be this critical of big tech and industry practices, turning this into a very white, male-dominated, hardcore-programmer niche that flirts with purist ideologies and bourgeois romanticism (Mansoux et al. 2023).

However, it could also be argued that the populations of many Global Majority countries face material limitations and economic hardships that require ingenious alternative solutions as a way of hacking reality in everyday life, exemplified by Brazilian 'gambiarras' and Cuban 'rikimbilis' – what researcher Ernesto Oroza names 'technological disobedience' (2012) or 'objects of necessity' (2006). Seen this way, it becomes clear that anti-capitalist alternatives to computation are, for the most part, very mundane work that requires maintenance, care and a calculated use of resources, and that will never be as exciting and fast as big tech products and the convenience of the cloud – but they may well be the last option on an increasingly exhausted planet. The contemporary exploration of alternative computation and radical computer engineering is manifold and draws from the notions of liberatory technology (Bookchin 1971), Marxist neo-luddism (Robins and Webster 1999), Indigenous programming (Corbett et al. 2020) and ethnocomputing (Tedre et al. 2006), followed by recent concepts within radical computer engineering research such as Permacomputing (Heikkilä 2020), Benign Computing (Raghavan 2015) and Collapse Informatics (Penzenstadler et al. 2015).1

A proliferation of Alternatives: Software-level

In the specific scope of software engineering and programming language creation, the emphasis here is on projects that do not have usability as their main goal, but which instead act as socio-political tools to question why, how and by whom these technologies were shaped in a certain way, and which deeper conversations arise as we shift the affordances and capitalist logics of those tools. It is no surprise, then, that unconventional computing shares many principles that resonate with queer code studies, as "the notion of queer code is both the subject and the process of the work, and this operates on multiple levels, 'queering' what would be considered to be the normative conventions of software and its use" (Soon and Cox 2020).

Within the realm of zeroes and ones, some nuances of reality might get lost or be completely incomputable, and researchers willing to explore and engage with in-betweenness in programming are rare, as such in-betweenness often gets in the way of predictability and code optimization. Winnie Soon's 'Vocable Code' (2017) offers a critical and poetic perspective on what could be considered a queer coding practice, treating source code as an executable language that can be a form of creative expression while still maintaining its original goal of giving a set of instructions to the machine.

As it is extremely challenging for humans to instruct the machine using only binary code, the necessity of a more human-centric approach was quickly noticed, culminating in the creation of FLOW-MATIC by Grace Hopper and her team in 1958. Since then, a multitude of high-level programming languages have been designed for all sorts of specific purposes, which only further complicated computer science's relationship with natural languages (Marino 2020).

In the 70s, the Chinese2 language had its existence threatened by the introduction of digitalization, as the first personal computers had Latin character encodings baked directly into their architecture. In addition, the extremely limited memory of those devices turned the digitization of at least 2,000 simplified characters – and their subsequent input via QWERTY keyboards – into a herculean task, an extraordinary feat of creativity and engineering that culminated in the Wubi input method by Wang Yongmin in 1983 (Hou 2021), nowadays only one of many methods of digitally processing Chinese characters.

Even without the problem of memory storage, "the computer compels compliance" (Marino 2020) to this day, as many of these problems persist for any aspiring programmer unfamiliar with the English language. Those tensions are particularly visible in Ramsey Nasser's قلب (Qalb) (2013), a programming language that was extremely difficult to implement because most text editors do not support Arabic characters, making the act of creating a functional version of it a defiant anti-imperialist statement against the lingua franca of programming.

Figure 01. Screenshot of a program in Cree# by Jon Corbett. (http://wg20.criticalcodestudies.com/index.php?p=/discussion/71/week-2-cree)
Figure 02. Screenshot of 文言 Wenyan-lang by Lingdong Huang. (https://wy-lang.org/)
Figure 03. Screenshot of a قلب (Qalb) script by Ramsey Nasser. (https://nas.sr/%D9%82%D9%84%D8%A8/)
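
These projects make visible a constraint that even modern toolchains only partly lift. As a small, hypothetical illustration (not Qalb, Cree# or Wenyan-lang themselves), Python 3 accepts identifiers from most of the world's scripts, yet its keywords remain irreducibly English:

```python
# Identifiers may be Arabic (Python follows Unicode's identifier rules),
# but keywords such as def and return stay English - the lingua franca
# persists at the core of the language.
def تحية(اسم):
    return "salam, " + اسم

print(تحية("dunya"))
```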

A proliferation of Alternatives: Hardware-level

Alternatively, explorations on the hardware level open even more pathways for radically different machines that don't rely on resource-depleting and energy-hungry models, even if their wide adoption in industry and daily activities is improbable. After decades of Moore's law, contemporary society has learned to take processing speed and sophistication for granted, which sets the bar high for any feasible but clunky alternatives. There are also other possibilities, such as quantum computing and nano-computing, currently being developed and massively funded by neoliberal governments in partnership with the tech industry (Press 2023), as they promise groundbreaking performance in the uncharted territory of the subatomic scale.

It has already been proven that different bio-molecular systems can function as logic gates, such as fluids (Adamatzky 2019), slime mould (Adamatzky 2015) and tree roots (Adamatzky et al. 2018), and, more recently, that a transistor made entirely of wood is possible (Tran et al. 2023). They act as provocations and examples of a collaboration between human ingenuity and non-human intelligence, widening the notion of what it means to compute.

Figure 04. Physarum logic gates in "Slime mould processors, logic gates and sensors", Andrew Adamatzky, 2015. (https://royalsocietypublishing.org/doi/10.1098/rsta.2014.0216#d1e797)
Figure 05. The components of the wood transistor. "Electrical current modulation in wood electrochemical transistor", Tran et al. 2023. Photo: Thor Balkhed, LiU. (https://liu.se/en/news-item/varldens-forsta-tratransistor)
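
What licenses the claim that such substrates "compute" is gate universality: any medium that reliably realizes a single universal gate can, in principle, realize every Boolean function. A toy sketch in Python (purely illustrative, standing in for whatever the fluid, mould or wood physically does):

```python
# NAND is functionally complete: NOT, AND and OR - and from there any
# Boolean circuit - can be composed from it alone. A slime mould or
# wood transistor realizing NAND is thus, in principle, a computer.
def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))

for a in (False, True):
    for b in (False, True):
        print(a, b, "->", "AND:", and_(a, b), "OR:", or_(a, b))
```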

These proposals challenge the mainstream by presenting a variety of existing ontologies rather than adhering to a single dominant perspective. In doing so, they move away from the westernized, sexist and heteronormative "focus on stable machines, stable instruments and stable knowledge" (Pickering 2016, 15), along with a dualistic view that only endorses efficiency and usability – if something doesn't work as intended, it cannot be promptly streamlined into the relentless capitalist mode of production, and is therefore deemed irrelevant.

It is evident that harnessing bio-molecular systems for computation does not present a perfect solution either. Despite the allure of a 'greener' materiality, the question of cross-species consent is largely overlooked in these cases, raising many ethical questions about harnessing non-human intelligences solely for human purposes.

Tending vulnerable horizons

The nonconformity of those propositions may prevent them from becoming materialized and widely accessible like the over 2 billion computers in the world, most of which will probably end up forgotten and obsolete in a landfill. Conversely, the goal of the experimental approaches outlined above is not to replace conventional silicon devices entirely, but to offer new research paths and ontological questions that decouple computation from digitality, allowing us to imagine a multitude of analogue, anomalous and amorphous tools that can endure without exploitation and self-destruction.

None of the routes presented here are flawless, but their strength lies in showing that it is possible to reimagine and resist the necrocultural aspects of current ICT industry practices, allowing vulnerable horizons to flourish and challenging the naturalization of deeply flawed dominant systems. There won't be a single, universal framework that can effectively address these complex challenges, and this diversity of cultures and ecosystems is exactly what current computational paradigms lack.

Perhaps the most revolutionary realization is to accept that anti-praxis is also a valid stance against the unsustainable digital practices of the affluent North, which will invariably collapse in a not-so-distant future where the tools we take for granted today are no longer viable.

Figure 06. A Venn diagram illustrating all the alternatives outlined within this paper.

References

Adamatzky, Andrew. 2015. "Slime mould processors, logic gates and sensors". Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. The Royal Society. Accessed June 05, 2023. https://doi.org/10.1098/rsta.2014.0216.

Adamatzky, Andrew. 2019. "A brief history of liquid computers". Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 374(1774), 20180372. Accessed June 05, 2023. https://doi.org/10.1098/rstb.2018.0372.

Adamatzky, Andrew, et al. 2018. "Computers from Plants We Never Made: Speculations". In: Stepney, S., Adamatzky, A. (eds) Inspired by Nature. Emergence, Complexity and Computation, vol 28. Springer, Cham. Accessed June 05, 2023. https://doi.org/10.1007/978-3-319-67997-6_17.

Bataille, Georges. 1949. The Accursed Share: An Essay on General Economy Volume I, translated by R. Hurley. 1988, New York: Zone Books.

Bauman, Zygmunt. 2004. Wasted Lives: Modernity and its Outcasts. Cambridge: Polity Press, p. 70.

Bookchin, Murray. 1971. “Ecology and Revolutionary Thought”, Post-Scarcity Anarchism. Berkeley: Ramparts Press, p. 58.

Bratton, Benjamin. 2021. “Planetary Sapience”, Noema Magazine. Accessed May 20, 2023. https://www.noemamag.com/planetary-sapience/.

Corbett, Jon, Laiti, Outi, Lewis, Jason Edward and Temkin, Daniel. 2020. "Week 2: Indigenous Programming" (Main thread) (Online Forum). CCS Working Group 2020. Accessed June 20, 2023. http://wg20.criticalcodestudies.com/index.php?p=/discussion/70/week-2-indigenous-programming-main-thread.

Edwards, Paul N. 1997. "Why Build Computers? The Military Role in Computer Research", The Closed World: Computers and the Politics of Discourse in Cold War America. London: The MIT Press.

Foucault, Michel. 1976. The Will to Knowledge: The History of Sexuality Volume 1, translated by R. Hurley. 1998. New York: Penguin.

Gebru, Timnit and Torres, Émile P. 2023, February 16. “Eugenics and the Promise of Utopia through Artificial General Intelligence”. IEEE SaTML 2023 - 1st Conference on Secure and Trustworthy Machine Learning. [Video]. YouTube. Accessed June 20, 2023. https://www.youtube.com/watch?v=P7XT4TWLzJw.

Han, Byung-Chul. 2019. Capitalism and the Death Drive, translated by D. Steuer 2021. Cambridge: Polity Press.

Heikkilä, Ville. 2020. Permacomputing. Accessed March 15, 2023. http://viznut.fi/texts-en/permacomputing.html.

Hou, Jue. 2021. "The Cybernetic Writing Pad: Information Technology and the Retheorization of the Chinese Script, 1977-1986". East Asian Science, Technology and Society: An International Journal, 15:3, 310-332. Accessed May 08, 2023. https://doi.org/10.1080/18752160.2021.1925398.

IPBES. 2019. Global assessment report on biodiversity and ecosystem services of the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services. E. S. Brondizio, J. Settele, S. Díaz, and H. T. Ngo (editors). IPBES secretariat, Bonn, Germany. Accessed June 07, 2023. https://doi.org/10.5281/zenodo.3831673.

Jameson, Fredric. 1994. The Seeds of Time. New York: Columbia UP, p. XII.

Kane, Carolyn L. 2019. High-Tech Trash: Glitch, Noise, and Aesthetic Failure. Oakland: University of California Press.

Kurzweil, Ray. 2005. The Singularity Is Near: When Humans Transcend Biology, New York: Viking Penguin.

Mansoux, Aymeric, Howell, Brendan, Barok, Dušan, and Heikkilä, Ville-Matias. 2023. “Permacomputing Aesthetics: Potential and Limits of Constraints in Computational Art, Design and Culture”. In LIMITS ’23: Workshop on Computing within Limits, June 14–15, 2023.

Marino, Mark C. 2020. Critical Code Studies, The MIT Press: Cambridge, Massachusetts, p. 153.

Mbembe, Achille. 2003. “Necropolitics”, Public Culture, Volume 15, Number 1, Winter 2003, pp. 11-40, Duke University Press.

Moravec, Hans. 1999. Robot: Mere Machine to Transcendent Mind, New York: Oxford University Press.

Oroza, Ernesto. 2006. "Architecture de la Nécessité" in Objets réinventés : La Création populaire à Cuba, ed. Pénélope de Bozzi. Paris: Editions Alternatives.

Oroza, Ernesto. 2012. "Desobediencia Tecnológica. De la revolución al revolico". In: Ernesto Oroza [blog], June 6. Accessed June 15, 2023. http://www.ernestooroza.com/desobediencia-tecnologica-de-la-revolucion-al-revolico.

Parikka, Jussi. 2015. A Geology of Media. Minneapolis: University of Minnesota Press.

Parisi, Luciana and Ferreira da Silva, Denise. 2021. "Black Feminist Tools, Critique, and Techno-poethics". E-flux Journal, Issue #123, Dialogues on Recursive Colonialisms, Speculative Computation, and the Techno-social. Accessed March 15, 2023. https://www.e-flux.com/journal/123/436929/black-feminist-tools-critique-and-techno-poethics/.

Pasquinelli, Matteo. 2017. "The Automaton of the Anthropocene: On Carbosilicon Machines and Cyberfossil Capital". South Atlantic Quarterly 116 (2): 311-326. Duke University Press.

Penzenstadler, Birgit, Raturi, Ankita, Richardson, Debra J., Silberman, M. Six and Tomlinson, Bill. 2015. “Collapse (& Other Futures) Software Engineering” Proceedings of the First Workshop on Computing within Limits. ACM, Irvine California, p. 1-3.

Pickering, Andrew. 2016. "The Ontological Turn: Taking Different Worlds Seriously". Social Analysis Journal, p. 1. Accessed May 07, 2023. https://doi.org/10.3167/sa.2017.610209.

Press, Gil. 2023. "New Funding for Quantum Computing Accelerates Worldwide". Forbes. Accessed June 12, 2023. https://www.forbes.com/sites/gilpress/2023/01/31/new-funding-for-quantum-computing-accelerates-worldwide/?sh=76586a8eb35b.

Raghavan, Barath. 2015. “Abstraction, Indirection, and Sevareid’s Law: Towards Benign Computing”. Proceedings of the First Workshop on Computing within Limits. ACM, Irvine California, p. 1–4.

Robins, Kevin and Webster, Frank. 1999. Times of the Technoculture: From the Information Society to the Virtual Life. London: Routledge.

Soon, Winnie and Cox, Geoff. 2020. Aesthetic Programming: A Handbook of Software Studies, Open Humanities Press, p. 168.

Soon, Winnie. 2017. Vocable Code. Accessed Jun 20, 2023. https://siusoon.net/projects/vocablecode.

Tedre, Matti, Sutinen, Erkki, Kähkönen, Esko and Kommers, Piet. 2006. “Ethnocomputing: ICT in cultural and social context” Communications of the ACM, Volume 49, Issue 1 (January 2006), 126–130.

Thorpe, Charles. 2016. Necroculture. New York: Palgrave Macmillan.

Tran, Van Chinh, et al. 2023. "Electrical current modulation in wood electrochemical transistor". Proceedings of the National Academy of Sciences. Accessed May 07, 2023. https://doi.org/10.1073/pnas.2218380120.

Zalasiewicz, Jan, et al. 2014. “The Technofossil Record of Humans.” Anthropocene Review 1, no. 1: 34–43. Accessed May 14, 2023. https://journals.sagepub.com/doi/abs/10.1177/2053019613514953?journalCode=anra.

Footnotes

  1. This vast proliferation of interrelated terms and definitions is being gathered and organized by artist and researcher Marloes de Valk in the 'Damaged Earth Catalog' (2023) as part of her PhD research project, in which the term 'permacomputing' stands out for its ties with permaculture principles. She also raises concerns over the lack of intersectional feminist representation in the scene. Available online at: https://damaged.bleu255.com/about/.

  2. The term 'Chinese' fails to encompass all the different languages spoken within and outside of mainland China, such as Mandarin, Taiwanese, and Cantonese, as well as many minor dialects. Because their written form uses traditional or simplified Chinese characters, these languages were grouped together here as 'Chinese' for the sake of clarity, as a more in-depth analysis of this topic would be beyond the scope of this paper.

07 The Atlas of Dark Patterns: Charting New Spaces of End User Consent

Darija Medić 1,
1 PhD candidate, Intermedia Art, Writing and Performance, University of Colorado Boulder. Boulder, USA
darija.medic@colorado.edu

Abstract

This essay introduces the foundations for The Atlas of Dark Patterns, a practice-based PhD research project focused on redefining perspectives on design for user consent in contemporary algorithmic interfaces through participatory reenactment performance. In it, we first explore how ethical issues are conventionally defined in the field of Human-Computer Interaction in terms of their involvement with extractivist practices of the attention economy. We then observe how behavioral design informs dark pattern implementation, weaponizing the physiological basis of cognition within the nervous system. We explore how The Atlas of Dark Patterns approaches the method of reenactment within participatory performance to provide contexts in which the affective space between users and their services can be enacted, revealing unconscious mechanisms behind the act of everyday user consent. Following the example of one of its case studies, The Terms of Service Fantasy Reader, we see how a new dark pattern emerged in the performances, expanding the landscape of dark patterns to include emotionally manipulative language weaponizing ethics of care. In order to situate such performance outputs, The Atlas of Dark Patterns is proposed as a participatory resource for the bottom-up mapping of (non)consent in user experience (UX) design as defined by the lived expertise of end users.

Keywords

Dark Patterns, Attention Economy, User Experience Design, Re-enactment, Performance, Consent, Microboundaries

Dark Patterns as Active Disablers of Consent

Figure 01. Stylized YouTube notification screenshot

In everyday personalized user experience (UX) design, suggestive and persuasive messages such as the one in the image above can commonly be seen blurring the experience of consent, choice and autonomy, leaving users feeling tricked or plainly bad for not acting in accordance with the default system settings. In the realm of Human-Computer Interaction, the academic home of UX design, examples of non-consensual design are analyzed through the concept of dark patterns – an umbrella term grouping various forms of deceptive design implemented to maximize profit (Narayanan et al. 2020; Bhoot et al. 2020; Gray et al. 2018). Originating from design practice, the term refers to commonly discussed examples of persuasive design1 which implement interaction models obstructing users' perceived perspective of choice. UX designer Harry Brignull coined the term in 2010, describing dark patterns as "carefully crafted with a solid understanding of human psychology, and they do not have the user's interests in mind" (Brignull 2010). He initiated a categorization on the website deceptive.design (formerly darkpatterns.org) in which the defined types have kept growing throughout the years2. Some types consist of strategies for tricking a user into paying more than what was initially claimed, such as Hidden Subscription3, Sneaking4, Preselection5 and Hidden Costs6. A salient everyday example of these types in combination is trying to book a cheap flight, where the booking process disorients the user through an overwhelming experience of suggestively added expenses at every step of the way to purchasing one's ticket. Some forms of dark patterns are noticeable from a usability perspective (such as Forced Action), while others are more subtle. Different people may experience these various types differently, which is why the diverse deceptive design practices are commonly used in combination.
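
To make the flight-booking example concrete, here is a toy sketch in Python (a hypothetical checkout flow with made-up fees, not any real airline's system) of how Preselection and Hidden Costs compound: each individually small, often pre-selected fee is drip-fed one step at a time, so no single increase feels large enough to abandon the purchase:

```python
# A toy model of "drip pricing": the advertised price anchors the
# decision, then preselected extras and late fees accumulate.
advertised_price = 29.99

checkout_steps = [  # (step label, fee) - all values hypothetical
    ("seat selection (preselected)", 8.00),
    ("cabin bag", 12.50),
    ("travel insurance (opt-out, preselected)", 6.00),
    ("payment 'service' fee", 4.99),
]

total = advertised_price
for label, fee in checkout_steps:
    total += fee
    print(f"after {label}: {total:6.2f}")

print(f"final price: {total:.2f} ({total / advertised_price:.1f}x the advertised fare)")
```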

Figure 02. A diagram showing how users of the online platform Reddit started classifying adverse experiences with design into dedicated subreddits. Source: r/assholedesign7

What connects these UX models is that they are based on designed constraints which mask or disable existing options within a system, so that the user is directed into an interaction model that monetarily benefits the service. In effect, dark patterns are a continuation of existing practices in product design such as crippleware, in which software or hardware features are deliberately disabled until, in the case of software, a user purchases an upgrade8. Dark patterns take these monetizing strategies a step further by constraining and actively nudging the user towards a preferred, profitable path of interaction, using diverse strategies to create an appearance of a lack of alternatives. What makes them patterns is that they have proven to be effective strategies from a usability perspective, to the extent that they have been adopted across different platforms. More importantly, what makes them dark is that they obscure both the user's perception of choice and the awareness of the manipulation taking place.

Increasing research on manipulative user experience design practices (Gray et al. 2018, Liguri et al. 2021) shows a growing awareness of the ethical implications of, and emerging regulation (Leiser and Santos 2023) around, persuasive interaction design implementations. At the same time, the field of personalized interfaces is developing fast, showing a need to devise diverse forms of dark pattern detection and manipulation literacy (Lewis 2014). As a way of approaching the growing space of deceptive design, my research project focuses on the experience of end users and on how they detect and relate to non-consensual interactions. In my doctoral thesis, I apply participatory action research principles, based on grounded theory, in which the participants taking part in the public performances shape the factors they are affected by as well as the ways they relate to these factors.

Figure 03. A chosen set of dark pattern examples for the participatory performance The Terms of Service Fantasy Reader during HASTAC 2023, Pratt Institute, NY.

Dark Patterns and the Body

‘Extractivism’ is a term most often understood in relation to large-scale, profit-driven operations for the removal and processing of natural resources such as hydrocarbons, minerals, lumber, and other materials. In an extended sense, the term refers more generally to a mindset in which resources serve a means-ends function, becoming commodities to be extrapolated and turned to profit. (Parks 2021)

Extractivism within design is generally discussed in terms of cognitive and surveillance capitalism (Zuboff 2019), in which emotional and cognitive resources, personal memories and experiences are extracted for profit. In this space, human-centered design based on empathy has been criticized as a form of exploitation and extraction of a person's emotional landscape for the creation of user personae – generalized prototypes of user profiles on which design decisions are based (Costanza-Chock 2020). Dark patterns, however, point to an additional type of extractivism, one of bodily, physiological resources. Often functioning as active agents, they both influence and adapt to users' behavior, manifesting as interaction feedback that locks in behavioral options. Within this paradigm of design, suggestive personalized services in effect distribute the act of consent between the interface and the user on the level of unconscious interaction. Such mechanisms in UX design connect to a long history of controlled experiments with models such as the Skinner box (or operant conditioning chamber), a laboratory environment created for the study of animal behavior.9 As early as the 1950s, findings from these types of experiments started consolidating with the development of predictive technologies10 into what today is called behavioral science.

In the field of user experience, behavioral science is the foundation informing interaction models through behavioral design, a set of methods applying behavioral economics11 principles in order to learn from factors that influence people's decision-making processes. Behavioral economics directly relies on how bodies learn, and therefore studies the physiological aspects of cognition and affect. It mainly targets the reptilian brain, the part of the nervous system which controls adrenaline and dopamine activators. The sympathetic nervous system12 is also key in this process, as its activation is strategically used for the purpose of maintaining user attention. To the user, this can appear as an amplification of very specific forms of attention and sensory pathways at the expense of others, such as when one suddenly realizes they have been continuously scrolling catered content for the past several hours. One of the risks of such a behavioral practice is that, through training the nervous system, types of interaction can be habituated into an almost automatic response, decreasing consciousness of one's behavior and desensitizing empathic capacities. Subsequently, the very ability to notice implementations of subtle and shifting forms of dark patterns can easily diminish through habitual repetition of nudged or, in other words, extracted consent. The implementation of behavioral design in effect weaponizes13 the resources of the nervous system as a form of bodily extractivism for behavioral manipulation.
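
The habituation loop described above can be sketched in a few lines. The following Python toy (an illustration of intermittent reinforcement in the Skinner-box tradition, with hypothetical rates, not a model of any actual product) shows how unpredictable rewards push a "checking" behavior towards automaticity when reinforcement outweighs the small decay that follows unrewarded checks:

```python
import random

# Toy operant-conditioning loop: an agent "checks the app" with some
# propensity; checks pay off unpredictably (new content, likes), and
# each payoff reinforces the propensity more than a miss weakens it.
random.seed(0)

p_check = 0.2   # initial propensity to check
alpha = 0.05    # reinforcement rate (hypothetical)

for step in range(500):
    if random.random() < p_check:           # the agent checks
        rewarded = random.random() < 0.25   # variable-ratio payoff
        if rewarded:
            p_check += alpha * (1.0 - p_check)  # strong reinforcement
        else:
            p_check -= alpha * 0.1 * p_check    # weak extinction

print(f"propensity to check after 500 steps: {p_check:.2f}")
```

Because the reinforcement is asymmetric and intermittent, the propensity climbs towards a near-automatic response – a minimal rendering of how nudged consent becomes habituated.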

Some of the first well-elaborated critiques of adaptive feedback mechanisms came as early as 1950, such as Norbert Wiener's The Human Use of Human Beings, in which he reflected on the ethical risks of implementing predictive machine learning systems – what he called in the book an "inhuman use of human beings" (Wiener 1950).14 Today predictive technologies are ubiquitous, and as machine learning algorithms advance, the use and extraction of human cognitive and affective resources intensifies, developing subtler forms of persuasion and targeted UX. Subsequently, the ethical issues surrounding the consequences of data manipulation powered by behavioral design grow in parallel. One such issue is the effect of a habitual normalization of supernormal stimuli,15 directly affecting people's emotions and decision-making. This practice has been identified as "hijacking the amygdala"16, the tendencies of which we can see in dark pattern examples such as Fake Scarcity, which communicates an imagined lack of resources a user is interested in to elicit a fear of loss.

The Atlas of Dark Patterns as a Performance Landscape

The Atlas of Dark Patterns explores the landscape of extractivist (Parks 2021) behavioral design of the user personae by studying the affective elements that shape the experience of automated personalization, in particular around the affordances of contemporary dominant UX consent. Since the project aims to broaden the scope of factors that should be taken into account when approaching consent in design, it focuses on different people's lived experiences, attitudes and tactics of consenting. The project doesn't approach dark patterns from a solutionist mindset, but aims to diversify methods for providing end-user perspectives. It is intended to build a body of experiential knowledge on addressing (un)consensual interaction design by collecting and displaying contributions from realized performances. By mapping phenomena that participants experienced as non-consensual in the way they took place, the method allows dark patterns and the act of consent to be approached as a spectrum rather than a binary (in the case of consent) or a rigid classification (in the case of dark patterns).

What is particular to algorithmically driven systems such as those present in one's smartphone is that they are in effect deeply behavioral: adapting and influencing agents designed to actively attempt constant interaction with the user they are customized for. Common everyday examples of such personalized software behavior are receiving a notification about a digital photo album, made from what the service assumes were travel or holiday photos, presented as made especially for and catered to the user, or finding that one's personal data has been automatically changed, such as a date of birth suddenly being updated.

In my research I approach concepts such as the agential assemblage in intra-action (Barad 2007), understanding agency as that which is distributed between entities, 1) both human and non-human, and 2) between the conscious, affective and unconscious. I approach this space of distributed agency from a behavioral standpoint, as an intended outcome of attention-economy-driven design, focusing on aspects in which instances of harmful habituation can occur. One of the ethical concerns this project revolves around is the effect of deceptive design normalization, meaning the acceptance, through daily experience, of the pervasive absence of consent in attention economy driven17 interfaces as unproblematic. In order to investigate the space of user interaction and behavior, I explore performance as an action-based method for addressing practices around consent that are not always rational. Rather than placing participants into an interview or questionnaire setting as research subjects, collective performance allows them an active role as well as access to resources of affect, as a way of resensitizing the attention resources of the nervous system. The performances are ad hoc because they are done in various settings (festivals, conferences, exhibitions) and gather people who come together for the experience of the performance and the contribution to the project but have most often never met before. They are collective because a group setting doesn't put the pressure and responsibility on an individual or isolated user to carry knowledge or provide all the questions and answers; those are formed in group exchange in a supportive environment. The performance settings explore practices of enactment of various, mostly opaque, user experience conditions, giving these situations a voice and offering a shift of perspective that foregrounds how and what bodies themselves learn, through interaction and habituation. For that reason, The Atlas of Dark Patterns proposes collective participatory re-enactment performance18 as a critical practice method for exploring the execution of mediated algorithmic behavior.

Figure 04. Collective participatory phone notification reenactment performance "How do you sit with it?", 2022, CU Boulder.

The Terms of Service Fantasy Reader

The Atlas of Dark Patterns builds from the output of several action-based research studies. One of them, The Terms of Service Fantasy Reader, is an ad hoc public rehearsal of dramatizing Terms of Service agreements as an inquiry into the opaque conditions of consent. Running as a public participatory performance, The Terms of Service Fantasy Reader is communicated as a designated space and time for the luxury of reading out the Terms of Service19 of various applications in use on participants' personal devices, focusing on the least clear, misleading or otherwise strange language found. The practice of collective reading allows participants to vocalize, in a shared space, what they experienced as the felt tone of the language. As each participant reads out their chosen segments, the contributions are recorded into a growing interactive online archive20 and web drama, accompanied by screenshots of suggestive app notifications.

Figure 05. The Terms of Service Fantasy Reader web interface (Act 2)

Why explore Terms of Service specifically? Inherently non-consensual in the way they are constructed and implemented, they can be seen as the legal backbone of the extractivist behavioral design paradigm. These difficult-to-read legal documents get "signed" on a daily basis, and their default settings often go unquestioned when an app is downloaded and installed. With options to either accept all the conditions and participate in a world of affordances of platform selfhood, or decline and be left out of social circles, these formats effectively facilitate a consensus based on a lack of choice. In other words, their consenting mechanism can be perceived as a dark pattern in itself.

In the conducted instances of the performance, participants first experienced being in the role of a chosen Terms of Service agreement and were then offered a space of response from their position as end users. Through the various iterations of The Terms of Service Fantasy Reader and the practice of choosing and enacting segments, participants identified a pattern of persuasive language carrying an unusually emotional tone for what would be considered a legal document. Further examination of the types of emotional connotations located in the various examples showed a narrative pattern: a weaponization of empathy for the purpose of persuasion. Such a pattern is present both in Terms of Service agreements and, more overtly, in diverse everyday app messages and notifications. These examples lie between the dark patterns of Confirmshaming and Trick Wording, exploiting ethics of care as a form of surrogate relationality. For the purpose of The Atlas of Dark Patterns glossary, these subtle forms of deceptive design were named microtoxicities, representing experiences of unease in consenting through various forms of external pressure accumulated over time.

Figure 06. Screenshot of a Duolingo notification for inactive users employing an emotional tone for the purpose of persuasion

The outcomes of The Terms of Service Fantasy Reader raise the question of whether more subtle forms of uninformed consent are in fact ethically "darker" by being less transparent. They also suggest that these forms of microtoxicities tend to amplify existing power asymmetries through harmful habituation. Moreover, what emerged from the enactment settings were participant expressions from the role of end users, showing responses in the form of personal microboundaries21 (Cecchinato et al. 2015; Cox et al. 2016). These findings show an existing need for a wider context and acknowledgment of lived user experience in approaching the topic of ethics and consent in user experience design. As one possible format towards that goal, The Atlas of Dark Patterns will be developed with the use of spatial concepts to gather the various written, visual, and vocal outputs of the participatory performances and to reflect overlaps between different types of dark patterns as felt in user experience, allowing a bottom-up, fluid categorization of non-consensual design.

The resources that this project provides are a call for more public attention to the nuanced ethical aspects of behavioral design practice around consent. By developing public participatory performances, the project offers experiential means of detecting dark pattern instances. Subsequently, by mapping everyday tactics of approaching dark pattern landscapes as potential microboundaries, the Atlas is envisioned as a tool that can provide methods for supporting resilience in defining and obtaining informed consent within the current market-driven digital realm. Finally, a much broader question this research poses is what informed and conscious consent could look and feel like outside the constraints of the attention economy.

Figure 07. The Terms of Service Fantasy Reader public performance, SLSA 2022, Purdue University, Indiana

Acknowledgements

I would like to thank Luísa Ribas and Christopher Watters for their invaluable feedback on this text as it was forming.

References

Barad, Karen. 2007. Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning. Durham and London: Duke University Press. https://doi.org/10.1215/9780822388128

Bhoot, Aditi M, Mayuri A. Shinde, and Wricha P. Mishra. 2020. “Towards the Identification of Dark Patterns: An Analysis Based on End-User Reactions”. In IndiaHCI '20: Proceedings of the 11th Indian Conference on Human-Computer Interaction (IndiaHCI 2020). Association for Computing Machinery, New York, NY, USA, 24–33. DOI:https://doi.org/10.1145/3429290.3429293

Brignull, Harry. 2010. "Dark Patterns: dirty tricks designers use to make people do stuff". Harry Brignull's 90 Percent of Everything. Accessed 23.06.2023. https://90percentofeverything.com/2010/07/08/dark-patterns-dirty-tricks-designers-use-to-make-people-do-stuff/index.html.

Cecchinato, M.E., Cox, A.L., & Bird, J. 2015. “Working 9-5? Professional Differences in Email and Boundary Management Practices”. Proceedings of the SIGCHI Conference on Human Factors in Computing systems. Seoul: South Korea.

Costanza-Chock, Sasha. 2020. Design Justice: Community-led Practices to Build the Worlds We Need. Cambridge, Massachusetts, The MIT Press.

Cox, Anna L., Sandy J.J. Gould, Marta E. Cecchinato, Ioanna Iacovides, and Ian Renfree. 2016. "Design Frictions for Mindful Interactions: The Case for Microboundaries." In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA '16). Association for Computing Machinery, New York, NY, USA, 1389-1397. DOI: https://doi.org/10.1145/2851581.2892410

Fitzduff, Mari. 2021. “The Amygdala Hijack”, Our Brains at War: The Neuroscience of Conflict and Peacebuilding, https://doi.org/10.1093/oso/9780197512654.003.0003, accessed 26 June 2023.

Gray, Colin M, Yubo Kou, Bryan Battles, Joseph Hoggatt, and Austin L. Toombs. 2018. “The Dark (Patterns) Side of UX Design.” In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). Association for Computing Machinery, New York, NY, USA, Paper 534, 1–14. https://doi.org/10.1145/3173574.3174108

Kropotov, Juri D. 2009. "Chapter 13 - Affective System". Quantitative EEG, Event-Related Potentials and Neurotherapy. Academic Press, 292-309.

Leiser, Mark, and Cristiana Santos. 2023. "Dark Patterns, Enforcement, and the emerging Digital Design Acquis: Manipulation beneath the Interface". Social Science Research Network. https://ssrn.com/abstract=4431048

Lewis, Christopher. 2014. “Understanding Motivational Dark Patterns.” In Irresistible Apps. Apress, Berkeley, CA: https://doi.org/10.1007/978-1-4302-6422-4_8

Narayanan, Arvind, Arunesh Mathur, Marshini Chetty, and Mihir Kshirsagar. 2020. “Dark Patterns: Past, Present, and Future: The evolution of tricky user interfaces”. Queue 18, 2, Pages 10 (March-April 2020), 26 pages. DOI:https://doi.org/10.1145/3400899.3400901

Parks, Justin. 2021. “The poetics of extractivism and the politics of visibility.” Textual Practice, 35:3, 353-362, DOI: 10.1080/0950236X.2021.1886708

Raymond, Eric S. 1996. The New Hacker's Dictionary, third edition, MIT Press Academic.

Redström, Johan. 2006. "Persuasive Design: Fringes and Foundations". In Persuasive Technology. 112-122. DOI: https://doi.org/10.1007/11755494_17

Simon, Herbert A. 1971. “Designing Organizations for an Information-rich World.” Computers, communications, and the public interest. Baltimore, MD: Johns Hopkins University Press. pp. 37–52.

Tinbergen, Niko. 1951. The Study of Instinct. Oxford, Clarendon Press. ISBN 978-0-19-857343-2

Wiener, Norbert. 1950. The Human Use of Human Beings: Cybernetics And Society. New edition. New York, N.Y: Da Capo Press

Zuboff, Shoshana. 2019. The Age of Surveillance Capitalism. London, England: Profile Books.

Footnotes

  1. While design can be understood as inherently persuasive (Redström 2006), persuasive technology is defined broadly as "any interactive computing system designed to change people's attitudes or behaviors" (Fogg 2003, p. 1).

  2. At the time of writing, the categorization on the website includes 16 defined types.

  3. “The user is unknowingly enrolled in a recurring subscription or payment plan without clear disclosure or their explicit consent.” https://www.deceptive.design/types/hidden-subscription, accessed 23.06.2023.

  4. “The user is drawn into a transaction on false pretences, because pertinent information is hidden or delayed from being presented to them.” https://www.deceptive.design/types/sneaking, accessed 23.06.2023.

  5. “The user is presented with a default option that has already been selected for them, in order to influence their decision-making.” https://www.deceptive.design/types/preselection, accessed 23.06.2023.

  6. “The user is enticed with a low advertised price. After investing time and effort, they discover unexpected fees and charges when they reach the checkout.” https://www.deceptive.design/types/hidden-costs, accessed 23.06.2023.

  7. https://www.reddit.com/r/assholedesign/, accessed May 2022, now private.

  8. In the case of hardware, crippling means that users need to buy extra parts for existing but disabled elements. See The New Hacker's Dictionary (Raymond 1996).

  9. A laboratory apparatus developed by B. F. Skinner in operant conditioning experiments to study animal behavior, which typically contains a lever that delivers reinforcement in the form of food or water upon being pressed.

  10. See more in Jill Lepore's book If Then: How the Simulmatics Corporation Invented the Future.

  11. "Behavioral economics is a discipline examining how emotional, social and other factors affect human decision-making, which is not always rational." Source: https://www.interaction-design.org/literature/topics/behavioral-economics, accessed 23.06.2023.

  12. The part of the nervous system which activates the body's "fight-or-flight" response.

  13. In this particular context, weaponization refers to the way in which neurotransmitter flows become potential weapons for directing user interaction, intentionally circumventing conscious choice as a form of subconscious manipulation.

  14. Drawing a parallel to fascist societies, he pointed to the risk of a loss of freedom and an increase of censorship in a fully automated society, due to habitual reinforcement of roles, functions and acts, as well as an exploitation of human resources for increasing the profit of factory owners.

  15. A sensory stimulus of an intensity or concentration not found in a natural habitat (Tinbergen 1951), such as fast food, porn or, in the context of user experience, interface design tactics such as the infinite scroll or targeted anger engagement on services such as YouTube.

  16. The amygdala is "a structure detecting threat or potential punishment and thus generating negative emotions such as fear and anxiety" (Kropotov 2009). An amygdala or emotional hijack is an immediate, overexaggerated affective response triggered by a perceived threat (Goleman, 1996). An example is the violent and offensive reactive behavior of some online arguments, called flame wars for their flammable dynamic of disagreement and affect escalation, which can lead to physical aggression, war mobilization, etc. (Fitzduff 2021).

  17. An approach of treating attention as a resource and applying principles of economics for the purpose of attention management, first laid out by psychologist and economist Herbert A. Simon (Simon 1971).

  18. The performances apply methods from socio psychodrama, an action based group therapy modality based on the theory of roles, spontaneity and surplus reality, opening the space to explore the narrative that these interface interactions create between service and user in a collective setting.

  19. Terms of Service agreements are legal documents that define the grounds upon which someone can engage with a certain service, but they are conceptually demanding, long, and often hermetic to read. At the same time, they are one of the main battlegrounds for user rights.

  20. Source: https://termsofservicefantasyreader.com/act-2/ accessed 23.06.2023.

  21. Practices by users done to limit the negative effects of intrusive digital experiences.

08 Re-Valuing RS Through Configure-Able Methods

George Simms 1,
1 I-DAT, University of Plymouth, Plymouth, UK
hello@georgesimms.net

Abstract

This essay introduces Configure-Able methods as a means to re-value recommender systems (RS), bringing together STS, software studies and crip studies to produce a generative toolset that helps imagine and achieve radical new abilities for infrastructures. Configure-Able methods enable us to negotiate the socio-political relations of infrastructure, examining their imaginaries in dialogue with their matterings to display disparities in their realities, but more importantly serving as a toolset to generate and enable alternatives. Taking RS as a case study, this project aims to use configure-able methods to examine their histories, philosophies and matterings, and to imagine new configurations informed by open software, intersectional critiques and material knowledges.

Keywords

Recommender systems, Crip Studies, Trans*Feminism, Software Studies, Science Technology Studies, Social Sciences, AI, Reinforcement Learning, Configure-Able Methods.

Configure-Able Methods and RS

Configure-able methods are rooted in the work of enabling intersectional4 criptiques of the methods and imaginaries of sociotechnical systems, at their core asking how these knowledges can help us transform relations of control into ones of care and self-exploration, enabled through life-affirming infrastructures5. They are informed by the radical work of Sins Invalid (2015) (Fig. 1) and Healing Justice London (2022), which transform community and disability justice through the voices, experiences and methods of those living through it. Following this agenda, configure-able methods aim to be a generative toolset for reconfiguring sociotechnical infrastructures into ones that can be led from their subject and not from a generalized and prescribed overview. This reconfiguring toolset aims to transform the imaginaries of big tech into minor, subjective, and dynamic ones, which can enable the people within them to have new agencies over what their community infrastructures are imagined as and how this comes into being.

Figure 01. Sins Invalid's 10 principles (2015)

Recommender Systems (RS), the case study of this essay, are one of the most used intersections with media on cloud platforms6, producing addictive selections of media (Chun 2016) you didn't know existed and making the unfathomable challenge of exploring the abundance of cloud media possible. RS do this through automated valuing, sorting, ordering, and breaking down of the mass of cloud data into hierarchies and comparative assemblages (Noble 2018), reinforcing how we imagine relationships as essentially comparative and hierarchical. The more classic and clear examples of these hierarchies are the likes of Noble's critique of Google's ranking, where results are ranked in a linear list, from most valued to least. With the likes of talkative search engines (GPT utilized by Bing) and the complex animatic dynamics of feeds, however, these hierarchies are being performed through new, complex and less transparent forms, which I will attempt to critique and re-value through configure-able methods.
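
At its most elementary, the hierarchical gesture Noble critiques is a one-line operation: plural relations are collapsed into a single comparable score and then into a linear order. A minimal sketch (the scores here are hypothetical stand-ins for whatever engagement signal a platform optimizes):

```python
# Reduce each item to one scalar "value", then rank: everything below
# the first few positions of the resulting hierarchy is rarely seen.
scores = {"item_a": 0.91, "item_b": 0.42, "item_c": 0.77, "item_d": 0.13}

ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # ['item_a', 'item_c', 'item_b', 'item_d']
```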

The current configuration of media platforms and their RS algorithms is performed through monolithic processes of control and ownership, giving them greater agency within the social/political relationality of ability in their network. Their socio-technical infrastructures do this by gatekeeping, augmenting and ordering our experience and representation of the world through them. These monolithic configurations of platforms are currently being (and have always been) challenged by free and open software commoning of other potential configurations. One of the predominant examples, especially after the Twitter takeover, is Mastodon on the ActivityPub protocol, providing an infrastructure for creating decentralized7 and federated8 networks and communities. Through these new configurations you see a shift within the political/relational model of ability (Kafer 2013) within the infrastructure, enabling the people using it to configure and form their communities according to their needs and desires. This shift moves away from the gatekeeping and extractivist methods of corpo-social platforms to ones that enable people to control their data, move their accounts to different servers9, use the same data between different platforms or access it from different software10. Most importantly, this relationship creates a space where people can develop their own abilities to create communities and places safe from infrastructural inequality and harm11. This reformation of infrastructure, enabling people to define their own configuration of imaginaries and their matterings, is a prime example of what I am calling configure-able methods.

Figure 02. Comparison of a standard enclosed proprietary stack (left) vs an open fediverse stack (right). The proprietary stack encloses elements to ransom for profit (e.g. data, connections, communities); inversely, the federated stack enables communities to determine these infrastructural decisions and relations. Diagram by author.
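
The account portability described above rests on small, open protocols rather than proprietary APIs. As a hedged sketch of that plumbing (using the standard WebFinger discovery endpoint of RFC 7033, which Mastodon and other fediverse software expose; the handle below is a placeholder), any server or script can resolve any federated account:

```python
import json
import urllib.parse
import urllib.request

# Resolve a fediverse handle to its ActivityPub actor document via
# WebFinger - the open discovery step that lets accounts and data
# move between servers and between softwares.
handle = "someone@mastodon.social"   # placeholder account
user, domain = handle.split("@")

url = (f"https://{domain}/.well-known/webfinger?"
       + urllib.parse.urlencode({"resource": f"acct:{handle}"}))

with urllib.request.urlopen(url) as response:
    descriptor = json.load(response)

for link in descriptor.get("links", []):
    if link.get("rel") == "self":    # the ActivityPub actor
        print("actor document:", link.get("href"))
```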

RS Criptique

Stepping into these technologies through configure-able methods, I have developed a criptique of the "cherry on the cake" (LeCun 2016) of RS and predictive AI: reinforcement learning (RL). The metaphor of the cherry on the cake for LeCun, Facebook's VP and Chief AI Scientist, represents the ability of RL to transform the bulk of intelligent systems (the cake) into adaptable ones that can take on a wider set of unknowns. This is amplified by both Google and OpenAI engineers suggesting that RL-powered models are the most likely step towards sentience or unified/general intelligence. The cherry and its stone here can be re-read as a metaphor for RL as the element that will bring this trained body of (un)supervised NNs to life. It is clear that RL is imagined as the keystone of sentient intelligence, as recently reinforced by RL's impressive abilities within LLMs15 like GPT models. But what do the concepts these systems are based on say about them? And how could we implement them otherwise?

RL itself is founded on the criticized concepts of animal learning (Fig. 3) by Edward Thorndike (1898), which configure a penal system of roles for animals and agents alike, where good behaviors are rewarded with freedom and nourishment and bad ones lead to the subject being incarcerated or punished until they get it right. RL here symbolizes the Promethean imaginary of platform and cloud capitalism (Ferreira da Silva 2016; Parisi and Ferreira da Silva 2021), the escape from enslaved and obscure nature into ordered freedom16. These implementations of penal logic have formed powerful dynamics within intelligent systems, but are very limited due to their reductive description of problem spaces and environments through a simplistic reward (penal) system, often just one or a few dimensions in description. If we also look at the language and metaphors (Cowan and Rault 2022) used to describe RL in practice, it favors wordings like "exploration vs exploitation", "maximize reward" and "greedy policies", reinforcing the extractivist imaginaries through which these technologies play out.

Figure 03. Thorndike's cat "Puzzle Box" (Chance 1999, 434)
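
The vocabulary quoted above is not incidental; it names the core loop of the textbook algorithms. A minimal ε-greedy bandit in Python (a toy illustration with made-up payoffs, not any deployed recommender) shows how the whole "environment" is reduced to one scalar reward per action, and how a greedy policy exploits whatever it already values:

```python
import random

# Epsilon-greedy bandit: explore with probability epsilon, otherwise
# act greedily on current value estimates, always to "maximize reward".
random.seed(1)

true_payoffs = [0.3, 0.5, 0.7]   # hidden reward probability per arm
estimates = [0.0, 0.0, 0.0]      # the agent's learned "values"
counts = [0, 0, 0]
epsilon = 0.1

for step in range(1000):
    if random.random() < epsilon:
        arm = random.randrange(3)                         # exploration
    else:
        arm = max(range(3), key=lambda a: estimates[a])   # exploitation
    reward = 1.0 if random.random() < true_payoffs[arm] else 0.0
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean

print([round(e, 2) for e in estimates], counts)
```

Everything the agent can know about its world is carried by that single reward signal – the one-dimensional description of the environment criticized above.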

Much like Chun and Barnett's analysis of latent vectors17 (Chun and Barnett 2021), RL systems are configured not only through imaginaries and the language of segregation, exploitation, and extraction, but also through mathematical principles such as optimal control. In many AI systems, optimal control is embodied in their configuration, but is implemented through their loss function. Loss values the performance of a system in order to tweak its potentially many thousands to billions of parameters towards an optimal outcome. Loss as a metaphor figures these complex dynamics into singular narratives of prospective (economic) losses and gains, which can be navigated through Ferreira da Silva's (2016; 2017) analysis of efficient causality18. She traces this thread through Newton, Descartes and Galileo, and sees it extracted from the rest of Aristotle's four causes (Falcon 2023). This configuration of efficient logics relies on the functionalities of mathematics to legitimize claims, instead of accounting for the whole, or final, cause. Through this reductionist approach, algorithms are only legitimized if they work, and not through a more complex relation of how they work and what effect they may have on the environment around them. The legitimizing mathematics of the loss functions that "work" are defined by rather rudimentary logics of differencing19: how far, spatially and geometrically, the "truth" lies from the action, often squared to exaggerate differences and clearly define outliers. Through configure-able methods we must question these outdated and destructive dynamics in order to imagine how we could configure these infrastructures otherwise (Ferreira da Silva 2016; Pritchard 2018; Soon and Cox 2021), implementing contemporary STS, social science, crip trans*feminist and intersectional approaches and models into their configuration.

Figure 04. Different loss functions that are applied to the distance between the "truth" and the output. ('ML | Common Loss Functions' 2019)
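
The "logics of differencing" can be stated in a few lines. A small sketch (with made-up numbers) comparing the mean absolute error with its squared counterpart shows how squaring makes a single outlier dominate the loss, and hence dominate what the system learns to correct:

```python
# MAE treats every deviation from the "truth" proportionally;
# MSE squares the differences, so the one outlier dominates.
truth  = [1.0, 1.0, 1.0, 1.0]
output = [1.1, 0.9, 1.2, 3.0]   # one outlier

errors = [t - o for t, o in zip(truth, output)]
mae = sum(abs(e) for e in errors) / len(errors)   # 0.600
mse = sum(e ** 2 for e in errors) / len(errors)   # 1.015

print(f"MAE: {mae:.3f}  MSE: {mse:.3f}")
```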

Configure-Able RS in practice

There is great power in the configure-ability of these analytic systems, in being able to perform them otherwise in new, revealing formations. An iconic example of a work that creatively reconfigures sociotechnical infrastructures is Joy Buolamwini's Gender Shades (Buolamwini and Gebru 2018), which uses facial recognition AI analytics and a custom dataset to analyze the algorithm's own racism and gender biases. There are also examples of infrastructures being formed through, and empowering, crip wisdom, such as Melt's ACCESS SERVER (Fig. 5), an email server that anonymously mediates and financially compensates access requests20 that disabled people send to cultural institutions. ACCESS SERVER reconfigures the awkward and labor-inducing process of requesting access and enables both the person and the institution in this exchange. It enables the person requesting access by providing anonymity and a third party to talk through to ease the dialogue, while also compensating them for each email sent. It enables the institutions to create a freer-flowing dialogue in which they can hear about and improve their accessibility from those affected by it; in this exchange, the ACCESS SERVER also tries to provide them with the resources to inform the change needed. Both of these works pull from the artists/researchers' lived experiences and only come into being through the nuances of their intersectional approaches.

Figure 05.Melt’s diagram of the ACCESS SERVER protocol. (MELT 2022)

So many of the tools for this sort of reconfiguring either lie within the complexities of code and its hierarchies of accessibility and understanding, where they may take years to learn and implement, or they are awkwardly arranged through predefined, limiting, paid-for and/or extractivist cloud platforms. Runway ML, one of the original for-profit creative AI tool platforms, is configured by taking open-source AI models and implementing them through a web interface, charging for the computation of training models and generating content. The key here is that most accessible home machines, and the people using them, would not be able to train or run these algorithms, forcing most people to enable them only through some sort of cloud computing infrastructure. The same can be said about the creation of datasets for these models, which would be difficult and very laborious to produce without web scraping or other cloud extraction methods. This relational ability is again not primarily situated in one place, such as the hardware; it is also inferred in the attitudes and politics of investors and developers focusing on the growth of these large-scale and inaccessible tools and infrastructures.

To practice configure-able RS I am coming together with my studio, Imaginary Practices (IP)21, to collaboratively develop different works that imagine through practice how RS and AI infrastructures might be configure-able. Informed by accessible and transformative AI projects like Rebecca Fiebrink's Wekinator (Fiebrink 2015) and Gene Kogan's ML4A (Kogan 2016), this project aims to form new intersections into these mechanisms, enabling RS and AI to be configure-able to different imaginaries and mechanisms. The collaboration with IP has so far produced a few preliminary tests that begin to form new tools and dynamics within AI infrastructures; for this essay I will discuss two.

Figure 06.F*MNIST dataset input sheet to add new disruptive indexes. Made by Megan Benson, member of IP.

The first project is the F*MNIST dataset, a dataset that we made collaboratively with a wide group of feminists, informed by Fermenting Data (Tyżlik-Carver, Rossenova, and Fuchsgruber 2022). Whilst forming the data we created a conversation around what a dataset and data could be through feminist approaches. The work's title is a play on the MNIST22 dataset, a standardized and normalized dataset that was/is a foundational benchmark for computer vision models. By F*ing the MNIST dataset we questioned what the standards of a dataset are, how we could renegotiate what normalizing this data could mean, and how we contextualize these indexes of data so that they are not isolated and efficient images. To do this, people inputting data were encouraged to be disruptive and unstandardized (whatever that meant to them), with space for people to add their own new indexes to the dataset (Fig. 6), as well as more information about themselves and the context of their input. This dataset is fluid and continually added to, and it is formatted in linked open data (LOD) to create a complex web of data, relations and people in semantic triples that are open and linkable, radically differing from the anonymous and isolated standards of the original MNIST.
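
As a rough illustration of what a contextualized LOD entry could look like (a sketch using the rdflib Python library; the vocabulary, names and values below are hypothetical, not the actual F*MNIST schema):

    from rdflib import Graph, Literal, Namespace

    FM = Namespace("https://example.org/fmnist/")  # hypothetical vocabulary

    g = Graph()
    entry = FM["entry/001"]
    # Unlike MNIST's anonymous, isolated images, each index stays linked to
    # its contributor, its context, and any new indexes they chose to add.
    g.add((entry, FM.hasImage, Literal("digit_7_disrupted.png")))
    g.add((entry, FM.contributedBy, FM["person/example-contributor"]))
    g.add((entry, FM.context, Literal("drawn left-handed, on a moving train")))
    g.add((entry, FM.newIndex, Literal("mood while drawing")))

    print(g.serialize(format="turtle"))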

Figure 07.The re-reader recursively re-reading this page of text. Each step inward is a different training, showing how the model maps and divides the space differently. Made by the author.

The second work with IP is the re-reader, an outcome of an exercise imagining what roles AI can play within text analysis, communal reading, and research. We have been playing with a few different configurations, focusing on how highlighting and other preexisting affordances of digital reading can be made intelligent through small-scale, local technologies that help us renegotiate what we read. One of the most promising experiments so far is an AI variational autoencoder23 trained on a small, local dataset of texts, compressing this set of knowledge into a three-dimensional latent space. This segregational latent space, whose critique by Chun I mentioned above, here takes on a new, non-prescriptive and performative form, enabling the person to retrain it and reform its ontology and splitting of space. Once trained, this latent encoding or ontology is used to highlight a text sentence by sentence in the color (RGB) determined by the three-dimensional latent encoding of that sentence (Fig. 7). This compositing of fluid and nondeterministic ontologies enables us to navigate texts simultaneously through different dimensions and dynamics: the ones we read, and the ones re-read by the trained model and interpreted by us. The model used here is also lightweight, quick to run and train locally on most consumer computers.
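
For readers curious about the mechanics, here is a minimal PyTorch sketch of a three-dimensional-latent VAE and the sentence-to-RGB mapping described above; it is an illustration under assumed dimensions, not the re-reader's actual code, and it assumes sentences have already been embedded as fixed-size vectors.

    import torch
    import torch.nn as nn

    class TinyVAE(nn.Module):
        def __init__(self, dim_in=256, dim_z=3):  # 3-D latent: one axis per RGB channel
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(dim_in, 64), nn.ReLU())
            self.mu = nn.Linear(64, dim_z)
            self.logvar = nn.Linear(64, dim_z)
            self.dec = nn.Sequential(nn.Linear(dim_z, 64), nn.ReLU(), nn.Linear(64, dim_in))

        def forward(self, x):
            h = self.enc(x)
            mu, logvar = self.mu(h), self.logvar(h)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
            return self.dec(z), mu, logvar

    def sentence_color(model, x):
        # Map a sentence's latent mean into an RGB triple for highlighting.
        with torch.no_grad():
            mu = model.mu(model.enc(x))
        return [int(c) for c in (torch.sigmoid(mu) * 255).squeeze()]

Retraining on a different local corpus reshapes the latent space, and with it the colors: the ontology is re-formed rather than fixed.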

These are two early attempts to enable AI and RS to be configure-able to different methods and, through these actions, we have realised that making RS and AI analytics configure-able also means making their concepts, imaginaries and matterings adaptable and accessible too. To do this we have started to entangle these often siloed approaches into a collaborative wiki/docs alongside the code. By doing this we are attempting to enable ourselves to form new dialogues, metaphors and matterings in the creation of these works, ones which can move beyond the efficient logics of contemporary AI. Configure-able methods here mean that these intersections of language and practice become generative re-imaginings of infrastructures. These methods bring with them not so many answers as space for questions and conversation around how these systems are configured and what that means for us as people and communities living within their political/relational modeling, enabling us to rethink, rename, remetaphorize and revalue their processes.

From this essay, we have seen the need for this change within RS, but the same is true of many other intelligent and future-imagined tech infrastructures, which need a deep rethinking and reapproaching. This project and its understanding of configure-able methods is very much at the beginning of its journey, but I see hope in this methodology and the radical agendas it grows from, as well as in the promising steps made so far. It is a promising starting point for us, Imaginary Practices, to begin this essential questioning of how we can configure these (intelligent) sociotechnical infrastructures otherwise.

References

Alexander, M. Jacqui. 2005. Pedagogies of Crossing: Meditations on Feminism, Sexual Politics, Memory, and the Sacred. Perverse Modernities. Durham [N.C.]: Duke University Press.

Aouragh, Miriyam, Seda Gürses, Femke Snelting, Helen V. Pritchard, and Jara Rocha. 2021. "Counter Cloud Action Plan: NEoN Digital Ethics Audit". 2021. http://titipi.org/?projects/digital-ethics-audit.

Benjamin, Ruha. 2016. "Racial Fictions, Biological Facts: Expanding the Sociological Imagination through Speculative Methods". Catalyst: Feminism, Theory, Technoscience 2 (2): 1–28. https://doi.org/10.28968/cftt.v2i2.28798.

———. 2019. Race after Technology: Abolitionist Tools for the New Jim Code. Medford, MA: Polity.

Bonhomme, Edna, Mario Guzman, Femke Snelting, and Pinar Tuzcu. 2020. "Bug Report: Tuning to Trans*feminist Xystem.Crash". 31 October 2020. http://meltionary.com/meltries/r.html.

Buolamwini, Joy, and Timnit Gebru. 2018. "Gender Shades". 2018. http://gendershades.org/.

Chance, Paul. 1999. "THORNDIKE’S PUZZLE BOXES AND THE ORIGINS OF THE EXPERIMENTAL ANALYSIS OF BEHAVIOR". Journal of the Experimental Analysis of Behavior 72 (3): 433–40. https://doi.org/10.1901/jeab.1999.72-433.

Chun, Wendy Hui Kyong. 2016. Updating to Remain the Same: Habitual New Media. MIT press.

Chun, Wendy Hui Kyong, and Alex Barnett. 2021. Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition. Cambridge, Massachusetts: The MIT Press.

Cowan, Tl, and Jasmine Rault. 2022. "Introduction: Metaphors as Meaning and Method in Technoculture". Catalyst: Feminism, Theory, Technoscience 8 (2). https://doi.org/10.28968/cftt.v8i2.39036.

Falcon, Andrea. 2023. "Aristotle on Causality". In The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta and Uri Nodelman, Spring 2023. Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/spr2023/entries/aristotle-causality/.

Ferreira da Silva, Denise. 2014. "Toward a Black Feminist Poethics: The Quest(Ion) of Blackness Toward the End of the World". The Black Scholar 44 (2): 81–97. https://doi.org/10.1080/00064246.2014.11413690.

———. 2016. "On Difference without Separability". Catalogue of the 32nd São Paulo Art Biennial, "Incerteza Viva" (Living Uncertainty), 57–65.

———. 2017. "1 (Life) ÷ 0 (Blackness) = ∞ − ∞ or ∞ / ∞: On Matter Beyond the Equation of Value". e-flux journal no. 79 (February). https://www.e-flux.com/journal/79/94686/1-life-0-blackness-or-on-matter-beyond-the-equation-of-value/.

Fiebrink, Rebecca. 2015. "Wekinator 2.0". http://www.wekinator.org.

Gumbs, Alexis Pauline. 2018. M Archive: After the End of the World. Durham ; London: Duke University Press.

Hamraie, Aimi, and Kelly Fritsch. 2019. "Crip Technoscience Manifesto". Catalyst: Feminism, Theory, Technoscience 5 (1): 1–33. https://doi.org/10.28968/cftt.v5i1.29607.

Healing Justice Ldn. 2022. "About". Healing Justice London (blog). 2022. https://healingjusticeldn.org/about/.

"How to Apply This Recommender System for My Website? · Issue #1664 · Twitter/the-Algorithm". 2023. GitHub. 5 April 2023. https://github.com/twitter/the-algorithm/issues/1664.

Kafai, Shayda. 2021. Crip Kinship: The Disability Justice & Art Activism of Sins Invalid. Vancouver: Arsenal Pulp Press.

Kafer, Alison. 2013. Feminist, Queer, Crip. Bloomington, Indiana: Indiana University Press.

Kogan, Gene. 2016. "ML4A". https://ml4a.net/.

LeCun, Yann. 2016. "Predictive Learning". Presented at NIPS, December 5. https://nips.cc/Conferences/2016/ScheduleMultitrack?event=6197.

MELT. 2022. "Meltionary - ACCESS SERVER". 31 March 2022. http://meltionary.com/accessserver.html.

"ML | Common Loss Functions". 2019. GeeksforGeeks (blog). 18 November 2019. https://www.geeksforgeeks.org/ml-common-loss-functions/.

Noble, Safiya Umoja. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press.

Parisi, Luciana, and Denise Ferreira da Silva. 2021. "Black Feminist Tools, Critique, and Techno-Poethics", e-flux journal no. 123 (December). https://www.e-flux.com/journal/123/436929/black-feminist-tools-critique-and-techno-poethics/.

Pritchard, Helen. 2018. "Critter Compiler". Executing Practices, 237.

Rogers, Reece. 2023. "How Threads' Privacy Policy Compares to Twitter's (and Its Rivals')". Ars Technica. 8 July 2023. https://arstechnica.com/security/2023/07/how-threads-privacy-policy-compares-to-twitters-and-its-rivals/.

Shelton, Samuel Z. 2020. "Disability Justice, White Supremacy, and Harm Reduction Pedagogy: Enacting Anti-Racist Crip Teaching". JCSCORE 6 (1): 190–208. https://doi.org/10.15763/issn.2642-2387.2020.6.1.190-208.

Sins Invalid. 2015. "10 Principles of Disability Justice". Sins Invalid. 17 September 2015. https://www.sinsinvalid.org/blog/10-principles-of-disability-justice.

Soon, Winnie, and Geoff Cox. 2021. Aesthetic Programming: A Handbook of Software Studies. London: Open Humanities Press.

Suchman, L. 2012. "Configuration". Inventive Methods, 62–74.

Thorndike, Edward L. 1898. "Animal Intelligence: An Experimental Study of the Associative Processes in Animals". The Psychological Review: Monograph Supplements 2 (4): i–109. https://doi.org/10.1037/h0092987.

Tyżlik-Carver, Magdalena, Lozana Rossenova, and Lukas Fuchsgruber. 2022. "Curating/Fermenting Data: Data Workflows for Semantic Web Applications: Curating/Fermenting Data". In Adjunct Proceedings of the 2022 Nordic Human-Computer Interaction Conference, 1–5. Aarhus Denmark: ACM. https://doi.org/10.1145/3547522.3547701.

Footnotes

  1. Digital infrastructures here denote the software, hardware, protocols, systems, and management of digital/computational technology that enable/scope our social digital abilities. This can be cloud drives, word processors, server stacks, local networks, GPS and of course recommender systems.

  2. Suchman describes configuration as: “Configuration in this sense is a device for studying technologies with particular attention to the imaginaries and materialities that they join together, an orientation that resonates as well with the term’s common usage to refer to the conjoining of diverse elements in practices of systems design and engineering.” (Suchman 2012)

  3. The empowering of people, communities, and their infrastructures through crip wisdom.

  4. Shayda Kafai says, “Sins Invalid defines intersectionality this way: ‘Simply put, this principle says that we are many things, and they all impact us. We are not only disabled, we are also each coming from a specific experience of race, class, sexuality, age, religious background, geographic location, immigration status, and more. Depending on context, we all have areas where we experience privilege, as well as areas of oppression... We gratefully embrace the nuance that this principle brings to our lived experiences, and the ways it shapes the perspectives we offer.’"(Kafai 2021, 31)

  5. Life affirming infrastructures (not essentially digital) are ones that enable people and communities to explore, share and learn from one another, instead of being prescribed set solutions that delegitimize situated knowledges and lived experiences. This relation implies an ability to form infrastructures outside of extracting, toxic, prescriptive, and rent seeking economies.

  6. Examples are Facebook feeds, Instagram reels and TikTok’s for you, YouTube shorts and more.

  7. Decentralized here denotes that one organization doesn't control all the assets of an infrastructure; rather, it is configured through many nodes/communities.

  8. Federated here means a decentralized network where not every node is connected, but where each node dictates how it wants to be networked and which nodes to connect to.

  9. Saying this, unfortunately a large number of these platforms are run on AWS, recentralizing them in the hardware of corporate cloud.

  10. Unfortunately this example has recently (within a few days of writing) been troubled by Meta's release of a Twitter clone called Threads, with promises of being federated. It will be interesting to see how they configure it and how the communities deal with Meta's entrance into their ecologies, but the underlying attitudes of control and extraction are already clear from Threads' terms of service (Rogers 2023).

  11. Mastodon allegedly being the origin of content warning blurred images/media, as well as being federated to enable communities to decide which servers they can interact with.

  12. Following the work of Ruha Benjamin (2016; 2019), Alexis Pauline Gumbs (2018), M. Jacqui Alexander (2005), and Denise Ferreira da Silva (2014; 2016; 2017), to understand how we can move beyond destructive carceral and segregationist ideologies and implementations within technoscience.

  13. The redistribution of cloud infrastructures from corporate to community (Aouragh et al. 2021).

  14. The de-embedding of colonial concepts and practices through crip and trans*fem approaches (Kafer 2013; Sins Invalid 2015; Hamraie and Fritsch 2019; Bonhomme et al. 2020; Shelton 2020; Kafai 2021).

  15. Large Language Models.

  16. As Ferreira da Silva and Parisi put it: “Importantly, this cosmogony must include the myth of Prometheus, as the autopoietic creator and mythical origin of technology for the modern world. As much as this myth corresponds to the belief in human progress, it also ensures that the technology of fire evolves into the steam engine of the modern bio-economic Man, telling the origin story of humanity as one of freedom from enslavement, from the obscurity of the unknown, and from Man’s own death.”(Parisi and Ferreira da Silva 2021)

  17. Where unsupervised AI systems form dimensional features that can be extracted and exploited.

  18. Denise Ferreira da Silva explains how “first, the 17th century philosophers who called themselves ‘modern’ devised a knowledge program that was concerned with what they called the ‘secondary (efficient) causes’ of motion, which cause change in the appearance of things in nature, and not with the ‘primary (final) causes’ of things, or the purpose (end) of their existence; second, instead of relying on Aristotle’s (384-322 a.C.n.) logical necessity for the assurance of the correctness of their findings, philosophers such as Galileo relied on the necessity characteristic of mathematics, more precisely, on geometrical demonstration as the basis for certainty”(Ferreira da Silva 2016).

  19. There are a number of types of loss, but almost all are static, exaggerating forms of comparisons (e.g. squared error).

  20. Melt put it as “Access requests explain what a disabled person needs to attend spaces, be they online or physical.”(MELT 2022)

  21. Made up of coder/hacker/researchers Yasmin Morgan, Megan Benson, Katie Tindle.

  22. Acronym of: Modified National Institute of Standards and Technology (MNIST)

  23. A variational autoencoder is a type of unsupervised AI that learns to compress inputs (text, images, etc.) into a low-dimensional latent space, from which they can be decoded back into their original form.

09 Computing atmospheric attunement and hybrid listening through Augury and Scrying

Juan Carlos Duarte Regino 1,
1 Aalto University, Espoo, Finland
juan.duarte@aalto.fi

Abstract

In this essay, I will elaborate on Augury, an artistic project that draws inspiration from ancient methods of perceiving and predicting atmospheric events, with a focus on utilizing weather data as a foundation for sound. By employing sonification techniques, the project aims to enhance our ability to connect with the elements of wind and natural radio. This endeavor finds its roots in ancient practices such as augury, where individuals relied on animal senses for weather divination, and scrying, where people would look into obsidian mirrors to perceive beyond what is evident. By emphasizing listening as a means of sensory perception, the project bridges the gap between traditional knowledge and contemporary technology. As an allegory to the birds of augury, the project uses self-made weather stations to gather data on wind patterns, air pressure, and humidity levels, which serve in the sound installation as materials for creating sound. This way, the data generates sonic elements that represent different facets of weather phenomena. Augury thus serves as a means of reconnecting with ancient weather-sensing methods while embracing modern technology. Through active listening, participants can immerse themselves in the intricate soundscapes of wind and natural radio, fostering a deeper connection with the ever-changing dynamics of the natural world.

Keywords

Sensing The Weather, Prediction Of The Weather, Weather Sonification, Augury, Scrying, Divinatory Technologies.

Introduction

In this essay, I will delve into the primary sources of inspiration for Augury, an artistic project rooted in ancient techniques used to sense and predict atmospheric phenomena. This artistic practice merges art and technology, mythology and science, attempting to deepen the understanding of ancient practices that explored perception beyond human capabilities, and of the significance of certain objects in weather divination. Building on this understanding, the project explores the use of weather data as the foundation for generating an immersive sonic experience, based on sonification techniques that enhance our perception of, and connection with, atmospheric elements such as wind and natural radio.

This project serves as an example of bridging the gap between traditional knowledge and contemporary technological advancements, particularly through a transdisciplinary approach. The installation Augury serves as a means to reconnect with a form of attentive listening that appreciates the intricacies of the weather. This is achieved by configuring remote sensing technologies following rituals of divination and atmospheric sensing, approaching meteorology from a perspective of deep time1. The emphasis on listening acts as a conduit for our sensory perception of weather, allowing for the integration of three dimensions of knowledge: situated2, embodied3, and mediated4. Each dimension plays a crucial role in sensing and perceiving the weather, and when combined through audible experiences, it is expected that participants of the installation can augment their listening and attune to atmospheric events. This connection with the atmospheric processes is also meant for expanding an ecological bond between technology, nature, and culture5.

Participants of the installation are provided with the opportunity to attune themselves to the intricate melodies and rhythms of wind and natural radio, fostering a profound connection with the natural world and its ever-changing dynamics. This immersive experience cultivates a deeper understanding and appreciation for the complexity of the weather6. The installation also offers the chance to interact with datasets and live streams obtained from the surrounding area near the exhibition space, enabling an engagement with technologies shaped by the ancient practices of weather sensing and divination referenced in this project.

Technologies of weather sensing & divination

Weather stations as proxies-emissaries

The utilization of weather stations in this project draws inspiration from the ancient divination practice known as augury, in which the behavior of birds was observed to anticipate future events and to make state decisions upon political and public life in Ancient Rome (French 2005; Lehoux 2012; Driediger-Murphy 2019). Like the birds of augury, these weather stations are employed as "informants" that gather data as intermediaries, providing the remote status of the weather so that the surrounding conditions can be understood once they are further processed through sound. In other words, these stations take on the role of emissaries, sensing the atmosphere beyond the capacity of the human sensorium and feeding the installation to autonomously produce sound.

Figure 01.Weather station and computing system designed for sonification.

The weather stations are equipped with several essential components to capture and process atmospheric data. First, four electret microphones arranged in a Cartesian position enable them to detect wind direction by averaging the predominant peak of the signal across them. However, these are also unintentionally triggered by ambient sounds such as city noise, which is perceived as wind; in future iterations of the design this could be improved by gating them with another type of sensor. In addition to the microphones, the weather stations feature a digital barometer, which measures air pressure, and a dust-particle sensor, which detects and quantifies the presence of particulate matter in the air. These stations provide crucial data points for understanding and interpreting the atmospheric conditions in the area surrounding the exhibition space.
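
As a rough sketch of how such an estimate could work (assuming the four microphones face the cardinal directions; the installation's actual averaging procedure may differ):

    import math

    def wind_direction(north, east, south, west):
        # Treat each microphone's wind-noise level as a vector component
        # along its axis and sum them into one direction estimate.
        x = east - west
        y = north - south
        angle = math.degrees(math.atan2(x, y)) % 360  # 0 = north, 90 = east
        strength = math.hypot(x, y)
        return angle, strength

    print(wind_direction(north=0.2, east=0.7, south=0.1, west=0.3))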

To facilitate the transfer of sensor data, the weather stations employ long-range radio modules7. These modules enable the seamless transmission of data from the weather stations to the user interface or main computing system. This data transfer mechanism ensures that the captured atmospheric information is effectively communicated and utilized in the generation of sound or visual representations within the installation. By combining these components, the artwork creates a comprehensive system for capturing, processing and translating atmospheric data into immersive auditory experiences.
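
A compact binary payload helps each reading fit in a single long-range radio frame; the layout below is a hypothetical sketch of such a packet, not the project's documented protocol:

    import struct

    # Hypothetical 25-byte frame: station id, four mic levels, pressure, dust.
    FMT = "<B4f2f"  # little-endian: uint8 + four float32 + two float32

    def pack_reading(station_id, mic_levels, pressure_hpa, dust_ugm3):
        return struct.pack(FMT, station_id, *mic_levels, pressure_hpa, dust_ugm3)

    def unpack_reading(frame):
        sid, m0, m1, m2, m3, pressure, dust = struct.unpack(FMT, frame)
        return sid, (m0, m1, m2, m3), pressure, dust

    frame = pack_reading(1, (0.2, 0.5, 0.1, 0.3), 1012.4, 23.0)
    assert len(frame) == struct.calcsize(FMT)  # 25 bytes per reading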

The integration of the weather stations' components, including microphones, a barometer, a dust-particle detector, and long-range radio modules, facilitates a multi-dimensional exploration of atmospheric phenomena. This exploration is achieved through an interactive sonification, where each dynamic is translated into a different sound layer that can be experienced simultaneously with the others (Hermann & Hunt 2005). These weather stations not only provide valuable insights into the atmosphere but also represent a mediated form of knowledge. By extending our perception through a combination of technologies, they enable us to sense and comprehend emergent processes and fluid transformations within natural environments.

In line with Katherine Hayles's concept of Non-cognitive Cognition (2017), these technical systems imbue our atmosphere with meaning by employing somatic markers such as chemical or electrical signals that align with their operational principles. Leveraging the cognitive capabilities of computational media, these systems adapt to the changing environment of the atmosphere. Furthermore, considering the agency of ubiquitous computing sensing systems, as discussed by Mark B. Hansen, is relevant in this context. These systems can catalyze sensation on a finer timescale than human perception, operating according to non-biological technical protocols (Hansen 2013); their micro-temporal operations might have the potential to facilitate attunement to atmospheric processes.

Figure 02.Scrying mirror and obsidian pieces used in Augury.

By combining technological advancements and computational protocols, the installation opens up new possibilities for understanding and engaging with the complexities of our atmosphere.

Smoking mirror

As an artist-researcher with origins in Mexico, in the global South, I was interested in bringing into this project the myth of Tezcatlipoca, an Aztec deity of great significance, closely associated with providence, divination, the night winds, hurricanes, obsidian stones, and the unconscious, among other attributes (Spence 1925; Young 1985). The name Tezcatlipoca itself translates to "smoking mirror" in the Nahuatl language (Paddock 1985; Hicks 2008). In the context of this installation, the concept of the smoking mirror is symbolically represented through the inclusion of mist and smoke machines. Around the 16th century, obsidian mirrors and the divinatory practice were transported from Mexico to Europe, where the practice was named scrying, referring to a kind of liminal perception through observation into obsidian mirrors as mediums (Ackermann & Devoy 2012). In Europe, scrying was popularized in occultist practices by the Elizabethan magician, astrologer, and mathematician John Dee (Whitby 1985).

Within the installation Augury, visitors have the opportunity to interact with obsidian pieces and a mirror that are connected to mist and smoke machines. By touching the fragments and the mirror made of obsidian, visitors activate the release of a fine mist and a smoke-like effect in the exhibition room. This visual representation serves as a metaphorical embodiment of the smoking mirror associated with Tezcatlipoca and scrying. The smoke also creates a sense of atmospheric dynamics inside the exhibition space, where one is able to visualize its turbulence as a dense cloud. As an extra visual element to the smoke, LED lights create flickering effects such as thunder-light and other kinds of lightning moods, which accompany the other multisensorial elements in the installation.

The inclusion of these elements not only introduces a situated way of knowing the Aztec deity and their association with divination but also creates an intimate and sensorial experience for the participants. By engaging with the obsidian pieces and the mirror, visitors physically and metaphorically connect with the concept of Tezcatlipoca as the smoking mirror, blurring the boundaries between ancient mythology and contemporary artistic expression through technologies.

Considering the specific cultural background from which scrying emerges, and valuing the animistic tone attributed to symbolic objects that enhance human senses, this divinatory practice can be considered a situated type of knowledge of the weather. Moreover, I consider it relevant to look at scrying through the notion of Ethnocomputing, a study field proposed by Ron Eglash that reviews culturally located ways of computing abstraction (Eglash 1999) and provides an understanding of the cultural dimensions of computing across an ample diversity of artistic and cultural artifacts (Tedre and Eglash 2018). Similarly, Cosmotechnics, a notion proposed by the philosopher of technology Yuk Hui, examines art and science initiatives from specific sociocultural contexts outside Western modernity (Hui 2019), which "could reveal new sensorium beyond the utilitarian manifestations of technology" (Hui 2021).

Touch-based user interaction and sonification

The installation incorporates copper-traced designs on the obsidian mirror and pieces. These traces serve a dual purpose as both decorative elements and functional components. They are connected to a touch sensor system, which allows for visitor interaction. When participants touch the areas on the mirror or pieces where the copper traces are present, the touch sensor system detects the contact and activates specific data sets associated with different days of data collection. This activation triggers the production of different soundscapes and modulates existing audio compositions, resulting in an immersive and interactive auditory environment.

The integration of copper traces and touch sensors not only adds visual appeal to the installation but also facilitates a tactile and engaging experience for participants. By incorporating these elements, the installation encourages active exploration and empowers individuals to have a direct influence on the auditory output, creating a personalized and immersive interaction with the artwork.

In terms of sound generation, the weather data, including wind direction, barometric pressure, and dust-particle measurements, is sourced from the four weather stations. These measurements are averaged and used as weighting factors to modulate a collection of wavetable-controlled oscillators, filters, and envelopes programmed in a Pure Data patch that diffuses sound across 4 to 8 speakers. Together, these components generate dynamic, generative soundscapes that respond to the atmospheric conditions captured by the weather stations.
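
One plausible glue layer between the stations and the Pure Data patch is OSC; the sketch below (using the python-osc library, with assumed port, addresses, and ranges) normalizes averaged readings into 0..1 control values of the kind a patch could map onto oscillators, filters, and envelopes:

    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 9000)  # assumes Pd listens for OSC here

    def normalize(value, lo, hi):
        return max(0.0, min(1.0, (value - lo) / (hi - lo)))  # clamp into 0..1

    def send_weather(wind_level, pressure_hpa, dust_ugm3):
        client.send_message("/augury/wind", normalize(wind_level, 0.0, 1.0))
        client.send_message("/augury/pressure", normalize(pressure_hpa, 980.0, 1040.0))
        client.send_message("/augury/dust", normalize(dust_ugm3, 0.0, 150.0))

    send_weather(wind_level=0.4, pressure_hpa=1012.0, dust_ugm3=23.0)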

The incorporation of touch-based interaction and sonification techniques in this project underscores a kind of embodiment of atmospheric knowledge. This aligns with the artistic methodologies of Pauline Oliveros's Deep Listening, which emphasize the use of integrated technologies to expand human perception and deepen our connection with natural phenomena (Oliveros 2005), much like the focus of this project.

Figure 03.Smoking machine triggered by touch interaction

Dawn Chorus - Sounds from the Sferics

This part of the process is still in development, but its association with the other elements of the artwork is essential. As development progresses, this aspect will contribute to reinforcing the meaning of a more-than-human perception of our atmosphere, which is already suggested by the divinatory practice of Augury.

The phenomenon known as the dawn chorus occurs in the upper atmosphere during the early hours of the day. It is characterized by the emergence of atmospheric natural radio, electromagnetic signals that resemble the melodic singing of a flock of birds. This type of sound is often associated with the transition from night to day: during the dawn hours, the ionosphere changes its electrical properties due to the variation in solar radiation, which in turn affects the propagation of electromagnetic waves through the atmosphere (Meredith 2019; Kahn 2013).

Conclusions

This essay provides an overview of an ongoing interactive installation that draws inspiration from weather-related sensing and predictive practices. It aims to establish connections between three distinct types of knowledge, fostering a heightened sense of attunement to the ever-changing dynamics of the atmosphere. By combining sensing and predictive practices in this hybrid ensemble, the installation seeks to cultivate a deep and intimate understanding of atmospheric conditions, contrasting with visualizations of weather data that do not necessarily require simultaneous embodiment, situating, and mediating.

The proposed arrangement of objects, interactive systems, and atmospheric processes merges ancestral and modern knowledge of weather. This exploration aims to forge a profound connection with our natural environment by incorporating different approaches to detecting and anticipating weather dynamics: as humans, as technical systems, and through symbolic manifestations of the more-than-human.

Following the initial iteration of this project8, it becomes evident that there is a need to experiment with more reliable wind detection systems and to fully implement real-time data interactions using remote weather stations. Additionally, the next stage of development should incorporate the concept of the dawn chorus alongside the rest of the elements, interweaving it with the notion of divinatory practices involving birds and atmospheric processes. Furthermore, exploring recent techniques of artificial prediction based on modeling available weather data would be a valuable addition to the conceptual framework presented.

Figure 04.Video Documentation from exhibition
https://youtu.be/SGafXzLUwSc

Acknowledgements

Augury was produced at the RIXC Centre for New Media Culture, with funding for residential visits received from the Nordic-Baltic Mobility Programme for Culture to establish the project "RIXC Art Science Residencies". Light design in the installation was created by Anton Filatov, and the 3D renderings for promotional materials were made by Rodrigo Cid Velasco.

References

Ackermann, Silke, and Louise Devoy. 2012. "The Lord of the Smoking Mirror: Objects Associated with John Dee in the British Museum". Studies in History and Philosophy of Science Part A 43.3: 539-549.

Driediger-Murphy, Lindsay G. 2019. Roman Republican Augury: Freedom and Control. Oxford University Press.

Eglash, Ron. 1999. African Fractals: Modern computing and indigenous design. New Jersey. Rutgers University Press.

French, Roger. 2005. Ancient Natural History: Histories of Nature. Oxfordshire. Routledge.

Haraway, Donna. 1988. "Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective". Feminist Studies 14.3: 575-599.

Hayles, N. Katherine. 2017. Unthought: The Power of the Cognitive Nonconscious. Chicago: University of Chicago Press.

Hansen, Mark B. N. 2013. "Ubiquitous Sensation: Toward an Atmospheric, Collective, and Microtemporal Model of Media". In Throughout: Art and Culture Emerging with Ubiquitous Computing, 63-88. Cambridge, MA: MIT Press.

Hermann, Thomas, and Andy Hunt (guest editors). 2005. An Introduction to Interactive Sonification. Berlin: Logos Verlag Berlin GmbH.

Hicks, Frederic. 2008. "Mockeries and Metamorphoses of an Aztec God: Tezcatlipoca, 'Lord of the Smoking Mirror'". Journal of Latin American Anthropology 9.2: 486-487. Arlington: American Anthropological Association.

Hui, Yuk. 2021. Art and Cosmotechnics. Minnesota. University of Minnesota Press.

Hui, Yuk. 2019. Recursivity and Contingency. Maryland. Rowman & Littlefield.

Kahn, Douglas. 2013. Earth Sound Earth Signal: Energies and Earth Magnitude in the Arts. Berkeley. University of California Press.

Lehoux, Daryn. 2012. What Did the Romans Know? Chicago: University of Chicago Press.

Meredith, Nigel P. 2019. "Turning the Sounds of Space Into Art". Astronomy & Geophysics 60.2: 2-18. Oxford: Oxford University Press.

Oliveros, Pauline. 2005. Deep Listening: A Composer's Sound Practice. Bloomington: iUniverse.

Parikka, Jussi. 2015. A Geology of Media. Minnesota. University of Minnesota Press.

Randerson, Janine. 2018. Weather as Medium: Toward a Meteorological Art. Cambridge, MA: MIT Press.

Spence, Lewis. 1925. The Obsidian Religion of Mexico. The Open Court.8: 1. Illinois. Open SIUC.

Tedre, Matti, and Ron Eglash. 2018. "Ethnocomputing". In Software Studies: A Lexicon, edited by Matthew Fuller. Cambridge: The MIT Press.

Whitby, Christopher. 1985. "John Dee and Renaissance Scrying". Bulletin of the Society for Renaissance Studies 3: 25-36. Open Library: Internet Archive.

Young, Leslie Montague. 1985. The Phoenix of the Western World: Quetzalcoatl and the Sky Religion, 169-171. Oklahoma: University of Oklahoma Press.

Zielinski, Siegfried. 2008. Deep Time of the Media. Cambridge, MA: MIT Press.

Footnotes

  1. In the archaeological sense of Siegfried Zielinski's Deep Time of the Media.

  2. In the style that Donna Haraway refers to in her essay Situated Knowledges, considered as heterogeneous accounts of the world, or culturally embedded manifestations.

  3. Also referred to by Haraway in the same text, but here I am considering it as non-linguistic, common sense empiricism through one’s sensorium, and mainly as an affective experience.

  4. In this case, I consider technology-based mediations, such as ubiquitous computing systems.

  5. This kind of triad is defined by Jussi Parikka as media-nature-cultures, I replace media for technologies, to align with weather-based or remote sensing and ubiquitous computing.

  6. See Janine Randerson's Weather as Medium (2018).

  7. These are also commonly known as LoRa modules.

  8. Exhibited at RIXC gallery, in the framework of the Art+Science residency, Riga, Latvia. May-July 2023.

10 Organum Paradoxum/Scalptomorpha: A Sculptural Parasite Plug-in to Hack the Human Anatomical System

Speckert, Marie Lynn1,
1 Tangible Music Lab, University of Art and Design, Linz, Austria
marie.speckert@kunstuni.linz.at

Abstract

Scalptomorpha is a concept that interconnects with the human body: an object possessing its own organism that can seamlessly integrate into bodily structures. This object harnesses data and energy derived from internal body processes, akin to the functionality of a metabolism. My research revolves around the arrangement of anatomical structures, guided by diverse system theories of organisms and medical concepts. I am particularly interested in exploring the hierarchies and guidelines within the human body and how they may lead to the emergence of novel functions and arrangements within an imaginary organism. In my work, I examine and investigate anatomical and organic concepts, exploring their development and functionality within the realm of the digital world. This research delves into the intricate relationship between the physical and the virtual, seeking to uncover new insights into the capabilities and possibilities that arise when anatomical and organic concepts interact with digital environments. Through this exploration, I aim to contribute to our understanding of the intricate complexities of the human body, its hierarchies, and its potential for transformation in the context of transhumanism.

Keywords

interface, system theories, anatomical structures, fictitious organism, symbiotic relationship, bio art, sound art, digestive system

Introduction

Scalptomorpha is a component of an idea, which connects to the human body. An object with its own organism that fits into a structure, such as the human body. This object takes data from internal body rhythms, which it uses and returns: data processing akin to a digestive system.

My research deals with the arrangement of anatomical structures and is guided by various system theories of organisms and medical concepts. My focus is on hierarchies and guidelines of the body and a resulting new function and arrangement of a fictitious organism. I examine and research anatomical and organic concepts and their development and function in the digital world.

Digital anatomy refers to the study or representation of anatomical structures using digital technologies. It involves the use of computer-based tools, imaging techniques, and virtual simulations to visualize and analyze anatomical features. Digital anatomy enables the exploration and understanding of the human body in a virtual environment, offering interactive and immersive experiences for education, research, medical practice, and other related fields. It can encompass various aspects, such as 3D modeling, virtual dissection, anatomical atlases, and computer-aided visualization of anatomical structures. This has advantages in early diagnostics through machine calculation and orientation in a patient's body. Artificial intelligence and sensors are increasingly becoming important tools in medical research. The body is calculated and defined using the parameters.

Digital anatomy and medicine are related fields that both deal with the study of the human body. Computer-aided Medicine can provide a valuable tool for medical professionals, as it allows for detailed exploration of the body without the need for invasive procedures.

Mankind grows with technology and technology with mankind. This dependency is increasing and represents an indispensable medium for the future. This project is a conceptual artwork that explores the idea of creating a new organ for the human body.

Figure 01.Scalptomorpha 2022

Scalptomorphas describe a sculptural body that, like a parasitic organism, immediately finds the place and form of its appearance in symbiosis with the specific representational dispositifs of ever-changing media that serve as host bodies.

What could be the relationship between the digital art tool and the human body?

What virtual anatomical understanding could be generated and derived?

Parasites and symbionts in art & science

Based on the observation of parasites, we can recognize interesting approaches to the body system; in my project I expand the term "parasite" in several directions and treat it metaphorically.

Biologically, parasites use the bodies of other living beings to feed themselves and to live in them, going through interesting cycles in order to reproduce. They use the bloodstream, the lymphatic system or other pathways in the body to reach the particular organs that serve as their habitat. The change from one habitat to the other represents both a challenge and a chance. In addition, the parasite must be able to influence the host enough to keep itself alive. To do this, it must connect to the organ system.

A distinction must be made between the ectoparasite and the endoparasite. An endoparasite is a parasite that resides within the body of its host, such as internal worms or certain bacteria. In contrast, an ectoparasite is a parasite that lives on the external surface of its host, like ticks, lice, or fleas (Piekarski 1954). Parasitic relationships can vary; a common type is parasitism, where one organism benefits at the expense of the host, and parasitic structures are also seen in other complex systems and variants, such as fungi, bacteria and viruses. In the field of cultural studies, one also finds parasitic patterns that have been extensively studied and have a profound impact on various aspects of human society. In cultural anthropology, parasites are studied as metaphorical representations of social dynamics, such as individuals or groups exploiting and benefiting at the expense of others. In media and literary studies, the parasite often figures as a manipulative character, consuming resources or exercising control. In films, books, and other media, the parasite is often portrayed as a creature that reflects societal fears by wielding power and control over humanity.

A good example is provided by Michel Serres, a renowned philosopher who saw parasites as pathogens that thrive on and depend on the exploitation of other organisms. He regards them as fundamental to the natural world, challenging traditional notions of hierarchy and symbiosis. Serres argued that the house system, which encompasses institutions and societal structures, can be viewed as a parasitic entity that exploits individuals and resources. He viewed the house system as a complex web of power relations in which certain groups benefit at the expense of others, emphasizing the parasitic nature of social structures. The Hungarian-British author and philosopher Arthur Koestler also explored the concept of parasitism, using the term to describe ideological systems that manipulate and control individuals. He deals with the complex dynamics between power, authority and hierarchies in organisms and bodies. Koestler introduced the concept of "holons" as a unit of analysis: holons are entities that exhibit both individuality and interdependencies within hierarchical systems, representing integrated and autonomous components of a larger whole.

In the artistic field the parasite is taken up in many areas and presented as a mirror of social structures. Matthew Barney provides an example in his "Cremaster" series. Among other things, Barney explores the symbiotic relationships between host and parasite, blurring the boundaries between them. Through his installations, films and sculptures, he shows the interplay of power dynamics, dependencies and complex networks that exist within biological and cultural systems. His artistic exploration of these themes invites viewers to reflect on the transformative potential of the parasitic relationships and structures of human anatomy. Additionally, there can be metaphorical or symbolic interpretations of parasitic relationships in human society. Looking at human relationship patterns, we discover many parallels to the biological parasite on a psychological level. Similarly, in the animal world, there are symbiotic and parasitic relationships. A guest that becomes useful to a host is called a "symbiote". In the USA, symbiosis is understood as any form of coexistence of different organisms; in Europe, symbiosis is a mutually beneficial relationship between two species. Other synonyms for symbiosis are "living together" or "community". Commensalism (stowaways) covers, among other things, animals that live in the organism of another individual (the host) in a relationship that is positive for members of one species and neutral for the other. An example are the so-called intestinal fish (carapids); unlike parasites, however, they do not harm their host (Piekarski 1954). In deep-sea anglerfish, males and females fuse to the point where the skin and bloodstream of the mates grow together; this is called sexual parasitism, and it is a form of anatomical connection more related to transplantation. The male fuses with the female's tissues and is henceforth unable to feed himself, instead being nourished by the female's bloodstream, much like embryos in the mammalian uterus (Swann 2020). Mutualism (e.g. bees and flowers) describes a symbiosis in which both partners benefit from each other, while in kleptoparasitism (e.g. the cuckoo) a parasite steals resources from another organism (Weitschat 2009).

Related works

The artist Ken Rinaldo presents symbiotic relationships in his installations. His work is often about creating interactive ecosystems in which technological elements coexist with living organisms, promoting a symbiotic connection. In doing so, Rinaldo examines the interdependence and harmony between man-made technologies and the natural world. A striking example is his work "Augmented Fish Reality". In this project, Rinaldo created an interactive environment in which robotic fish and live fish coexist. The sensor-equipped robotic fish react to the movements and behaviors of the living fish, creating a symbiotic interaction that blurs the lines between the artificial and the organic. The aim of the installation is to prompt reflection on the networking of all forms of life and the potential of harmonious coexistence between humans, technology and nature.

The artist duo "Art Orienté Objet" deals with symbiotic relationships between humans and animals. In their project "May the Horse Live in Me", the artist Marion Laval-Jeantet had horse blood plasma injected into her own body to contextualize the blurring of boundaries between species and to explore themes of symbiosis, identity and connection between species. For the performance she donned hoof-like, digitigrade leg extensions and walked around the room in step with a horse.

Australian artist Stelarc is best known for his exploration of the human body and technology. In some of his later works he incorporated the concept of symbiotic relationships. An example is his project Parasite: Event for Invaded and Involuntary Body. In this performance, Stelarc attached a robotic arm to his body, creating a symbiotic relationship between his biological body and the technological augmentation. This work questions the limitations of the human body and explores the potential for harmonious coexistence between the organic and the artificial.

Ad Infinitum is a stand-alone, interactive installation in which visitors can feed the machine with energy. This artwork is a parasitical entity that generates the energy it needs to stay alive from humans. Once a visitor's arm is in the machine, the arm is held and its muscles are stimulated to perform a cranking motion; our kinetic energy is thus supplied to the machine. The only way to free a visitor is to trick another visitor into sitting in the opposite chair and taking their place. This project reflects on what it means to be "used" by a machine.

Scalptomorphas

I took the initiative to create three Scalptomorphas, each endowed with distinct personalities.

Foraredina

Figure 03.Foraredina 2022

Foraredina is an intrusive character that increases perspiration to create conductivity in the body. By stimulating skin regions and underlying muscles, it triggers tension, twitches, and sweat production. The parasite exploits the host's autonomic nervous system, manipulating sweat glands and blood flow to navigate and alleviate muscle pain. Additionally, Foraredina has symbolic potential, aiding with skin issues, muscle relaxation, and disease detection by promoting sweating for cleansing and complexion improvement. It scans the region for veins and at the same time provides relief from muscle pain.

Prosoma Crani

Figure 04.Prosoma Crani 2023

Prosoma Crani is a symbiote that attaches to the host's head, utilizing sound to access the body and control sensory perception. Its tentacles manipulate balance and proprioception. By generating deep-frequency sound, it gains control over the host, influencing actions and sensory experience. In return, it provides a stimulating alternative sound layer that resonates within the bones.

The symbiont achieves control over the host in order to monitor and steer any actions, perceptions, or movements on its behalf. Its goal is to set the host in motion in order to produce optimal bone conduction and to influence the sensory perception of sound. It uses bone conduction to create a map of the body, seeking out cavities, pathways and architectural constructs in order to gain access to the interior and to the body's cycles.

Actus Tick

Figure 05.Actus Tick 2023

Actus Tick is a superficial parasite that relies on the host's vital signs and dynamics. Once attached to the host's arm, it adapts to the circulatory system's rhythm using its mouths and pulsating body. Feeding causes it to inflate and grow. If sustained long enough or through host switching, growth is encouraged. However, without the host's circulation, Actus Tick cannot survive. When oversaturated, it gets too heavy and falls off, needing to break down food before finding another host.

The Scalptomorphas are attached to the human body and gain access to the organ system by measuring and analyzing medical parameters that monitor the body. The Scalptomorphas trigger manipulations to get at their food sources. I am talking about food in terms of information in the body that defines the state and distribution of resources. Palpable values provide this information and can be influenced and changed by external influences. These influences can have positive, but also negative, emotional effects on the wearer. A mutual dependency develops, supported by both the wearer and the object, since they benefit from each other. A new way of life develops as a result.

Coming back to the idea of the objects functioning as a kind of parasite, one possible trigger of certain parameters would be "hacking" the body. The sustaining organs in the body, such as the liver, lungs, and heart, communicate with each other and send signals, impulses and information to the brain. The parasite mostly settles in organs, feeding on and living in them as a habitat. As already described, this can have a symbiotic, positive effect on the host, or a negative, harmful one - but it is mostly manipulative.

This information from certain organs could trigger the object to send signals to other parts of the body. One idea would be to use the fascia system, a connective tissue that surrounds and interconnects various structures within the body, including muscles, bones, organs, and nerves. It forms a continuous network of fibers that provides support, protection, and stability to the body. Fascia also plays a role in transmitting mechanical forces and coordinating movements between different parts of the body (Stecco 2015). It would be conceivable to tap into these endogenous signals, feeding them to the object, which processes them and makes them feelable to the body. With the processed information, the objects can generate stimuli, e.g. emitting sound to the body area. This creates a cycle that docks with the body's own rhythms.
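
As a purely speculative sketch of such a cycle (all signals and mappings below are invented for illustration): a simulated internal rhythm is smoothed into an activity level, which is then mapped onto the frequency of a tone the object could emit back to the body.

    import math

    def fake_pulse(t, bpm=70):
        # Stand-in for an endogenous signal: a sharpened sine as a pulse wave.
        return max(0.0, math.sin(2 * math.pi * (bpm / 60) * t)) ** 8

    def smooth(prev, new, alpha=0.05):
        return (1 - alpha) * prev + alpha * new  # simple exponential smoothing

    level = 0.0
    for step in range(1000):
        t = step * 0.01                      # 10 ms steps
        level = smooth(level, fake_pulse(t))
        tone_hz = 80 + 300 * level           # map activity onto an 80-380 Hz tone
        # tone_hz would drive the sound the object feeds back to the body,
        # closing the loop with the body's own rhythm.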

The simulated, non-organic objects have something corporeal and are reminiscent of something organic. They create an association with biological models that represent anatomical structures in an abstract way. They have their own organism and can represent the collected values (food) in the form of e.g. sound, which in turn influences the organic body.

Conclusion

As a result, the Scalptomorphas also develop a relationship with humans, which brings advantages and disadvantages; hence the association with the parasite. It also remains unclear who, in the end, is the parasite - human or Scalptomorpha. After all, well-being and the preservation of life are crucial for people. For this reason, the Scalptomorpha is considered an indispensable tool, provided it fulfills this task. The Scalptomorpha, on the other hand, draws up a map of the body and creates a digital overview. It processes the collected benchmarks and converts them into defined triggers. This creates an anatomical image which, for example, could be displayed.

However, one should keep in mind that the technological approach is clearly different from that of the living parasite. Their abilities give them the potential to scan and define the body. The numbers, measured values and impulse generators, which represent a virtual form of anatomy, play a role here. The abstraction creates an association of an anatomical image and depicts the body as a purely mechanical construct. The condition of the body can thus be diagnosed and kept alive. The idea behind it is to create a new kind of symbiotic relationship between the human body and technology, in which the machine becomes an integral part of the body's functions.

The fluctuation between abstraction and concreteness is an artistic medium. Taking the idea further, it is interesting to hack a concrete source of information within the body, one that surrounds and connects all organs. With digitization, a modern anatomical image can be analyzed and designed, redefining the roles and hierarchies in the body. The form of representation also gains a new focus, which I would like to explore artistically in the future.

References

Bell, Shannon. 2017. Stelarc: Performing the Posthuman. Routledge.

CLOT. 2015. "ART ORIENTÉ OBJET, blurring the constraints of our relationship with animals". https://clotmag.com/biomedia/art-oriente-objet

Combes, Claude, and Daniel Simberloff. 2005. The Art of Being a Parasite. Chicago: University of Chicago Press.

Grochowska, Izabela. 2018. Stelarc: Obsolete Body Suspensions and Robotic Prosthetics. Routledge.

Hasso-Plattner-Institut. 2016. "Ad Infinitum: a parasite that lives off human energy". https://hpi.de/baudisch/projects/ad-infinitum.html

Henneberg, Maciej, and John Fenney. 2002. Stelarc: The Cyborg Experiments.

Hille, Christiane, and Julia Stenzel. 2014. Cremaster Anatomies: Beiträge zu Matthew Barneys Cremaster Cycle aus den Wissenschaften von Kunst, Theater und Literatur. Bielefeld.

Koestler, Arthur. 1967. The Ghost in the Machine. London: Hutchinson.

Piekarski, Gerhard. 1954. Lehrbuch der Parasitologie: Unter besonderer Berücksichtigung der Parasiten, pp. 7, 13. Heidelberg: Springer.

Rinaldo, Ken. "Augmented Fish Reality". https://www.kenrinaldo.com/portfolio/augmented-fish-reality/

Serres, Michel. 1987. Parasite. Paris.

Smith, Marquard, and Julianne Pierce. 2005. Stelarc: The Monograph. Cambridge, MA: MIT Press.

Stecco, Carla. 2015. Functional Atlas of the Human Fascial System. Churchill Livingstone.

Swann, Jeremy B. 2020. "The immunogenetics of sexual parasitism." Science. https://www.science.org/doi/10.1126/science.aaz9445

The Artist Bestiary. 2013. "ART ORIENTE OBJET: PERFORMANCES ABOUT TRANSFORMATION". https://artistbestiary.wordpress.com/2013/08/09/art-oriente-objet-performances-about-transformation/

Weitschat, Wolfgang. 2009. Jäger, Gejagte, Parasiten und Blinde Passagiere – Momentaufnahmen aus dem Bernsteinwald, p. 251.

11 A Plague in Cyberspace: The Importance of Being-on-Line

Ruby Thelot 1,
1 Parsons School of Design, New York, USA,
rubythelot@gmail.com

Abstract

This paper is an investigation of the intersection of memory and Being, and how they are affected by technology. Specifically, I tackle the replicability of digital artifacts and the non-transference of their memory. To exemplify the failure of transference, the paper leans on anthropological concepts in order to understand how memories may be shared in a cross-cultural context. It utilizes Eduardo Viveiros de Castro’s perspectivism concept in order to better understand how a memory created in a digital realm cannot be understood by an outsider because one had to be-there to comprehend. Similarly, the paper affirms that the reality of a digital reminiscence may only be comprehended through being-on-line—by being in that particular space and time.

Keywords

Being-on-line, Digital Ontology, Digital Perspectivism, Ontological Turn, Corrupted Blood Event, World of Warcraft.

You had to be there!
You have to be-on-line!

Digital artifacts occupy a grey ontological area. They eschew the restraints of physicality through a set of unique affordances. Kallinikos, Aaltonen, and Marton (2013) detail the characteristics of this ambivalent ontology as editability, the capacity for the artifact to be altered after creation; interactivity, the affordance for user-chosen contingent actions; and distributedness, the ability for the artifact to be present in multiple systems at once. Distributedness is adjacent to the concept of replicability. In digital systems, most files, unless protected, can be duplicated ad infinitum. The promise of lossless digital media is eternal replication without loss. We point to these affordances as antipodal to physical goods, which by the nature of their material existence are scarce. If I have this apple, you cannot have it too. This fact has led to a romanticization of digital space as a realm of abundance, where the restrictions of physical space evanesce. But there is loss: loss of experience. As experiences or subsequent accounts of experiences are replicated across the internet, the true essence of the experience is lost, claimable only by those who were on-line.

This research essay seeks to explicate the interplay between simultaneous presence during a digital event and the impossibility of reproduction of the event’s memory, thereby affirming the importance of being-on-line.

Firstly, there is no true dualism (Jurgenson 2011), by which I mean no separation. Like the physical apparatus that engenders it, the phenomenological experience of the digital artifact remains tethered to the material. For instance, I write to you from a desk, located in the middle room of my Queens apartment. It is an office of sorts. This is my main locus, or access point, to my online realms. Its walls, the lighting, the temperature are all part of the interface through which I access the Online. The room is my portal to the Online sites and networks of which I am a member.

I was influenced by David Rudnick's concept of the "digital-prime", presented in the 2021 "Primacism" episode of the Interdependence podcast (Rudnick 2021). "Digital-prime" designates an experience that occurs first and foremost online. Events such as Travis Scott's live concert in Fortnite, where he appeared as a giant avatar and sang to a crowd of millions, the romantic relationship sparked in a massively multiplayer online role-playing game (MMORPG) dungeon, or the Corrupted Blood event in World of Warcraft, the eponymous Plague in Cyberspace, all represent instances of the digital-prime.

The Corrupted Blood event, specifically, was a deadly plague that affected World of Warcraft players in 2005, after the release of patch 1.7.0. It was caused by a glitch which enabled an infectious debuff, an effect or temporary curse that harms its target, to be transmitted outside the dungeon where it should have been contained. The debuff in question was both extremely powerful, inflicting 250 to 300 points of damage to a character's health, and extremely virulent: it could be transmitted to pets and non-player characters (NPCs). Low-level players afflicted with it would die in a matter of seconds. This event was seminal in the history of World of Warcraft and its almost 5 million players, and it has made its way into the pantheon of internet lore.
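A back-of-the-envelope sketch makes the lethality plain. The damage range below is the one reported above; the tick interval and the hit-point totals are assumptions for illustration, not values taken from the game.

```python
# Toy re-enactment of the Corrupted Blood arithmetic. 250-300 damage
# per tick is reported above; the two-second tick and the HP figures
# are assumed for illustration only.
import random

TICK_SECONDS = 2           # assumed debuff tick interval
DAMAGE_RANGE = (250, 300)  # damage per tick, as reported

def seconds_to_die(hit_points: int) -> int:
    """How long an afflicted character survives the debuff."""
    elapsed = 0
    while hit_points > 0:
        hit_points -= random.randint(*DAMAGE_RANGE)
        elapsed += TICK_SECONDS
    return elapsed

# A low-level character (~1,000 HP, assumed) versus a high-level one.
print(seconds_to_die(1000))  # dies in under ten seconds
print(seconds_to_die(8000))  # survives roughly a minute
```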

The title "A Plague in Cyberspace" is also a reference to "A Rape in Cyberspace", Julian Dibbell's seminal 1993 essay from the early days of virtual worlds, which presents an egregious account of rape in a mediated environment. The crime was perpetrated by a character named "Mr. Bungle" in LambdaMOO, a Multi-User Dungeon or MUD. Using a "voodoo doll" program, Mr. Bungle assigned actions to other characters against their consent, many of which were sexual. The essay recounts the trauma suffered by the members of the MUD who had been violated; one member even recounted that "post-traumatic tears were streaming down her face". The rape, like the plague, exemplifies that the digital-prime experience and its memory are not circumscribed in one realm; rather, they are astride the hybrid created by the mixed-realm experience.

This concept of the digital-prime exists on an Escherian spectrum with the "physical-prime" at the other end. For the last 20,000 years of human civilization, we have built cultures in a physical-prime fashion. As humans expanded their reach on Earth, they built physical markers of their culture, rites, and rituals. The cave paintings of Lascaux can be understood as the first artistic expression of that culture, and that expression lived in our shared physical ritual. The new culture we build will certainly include hybrid memories, if not purely digital-prime ones.

The Internet is built on the lore of the digital-prime, a series of events occurring within non-material realms. In 1996, the Lavender Town Syndrome spread across Japan. It was a rumored curse that afflicted children who visited Lavender Town in Pokemon Red and Pokemon Blue and listened to its eerie soundtrack; the children allegedly soon after took their own lives. Stories of the 200-or-so suicides that the score had supposedly engendered became very popular on the site Creepypasta around 2010, around the time I used to browse the page.

In 2009, SlenderMan made his first appearance on the Something Awful forum. The legend haunted innumerable teenage minds, and I spent countless nights browsing fora where people detailed at great length their experiences with the creature. There was also the Mothman's tale and the blurry images anons would share of their encounters. They described these encounters in such detail that I could visualize them. To this day, I can't escape the shiver-inducing reminiscence of the Goatman stories I read. These stories exemplify digital lore and myths anchored in the digital-prime, made for and by the internet; still, they cannot escape their tether, inextricably linked to the physical domain which surrounded their experience.

In my foray through memory, I couldn't help but see myself sitting at the pseudo-wood desk in the black plastic chair. This led me to the physical locus of web-surfing, or going-online, the material environment of these experiences: the home computer. The home computer in the early aughts was the point of entry for cyber-active teens. It was often in a shared space, overlooked by parents, siblings, or whoever else entered the home. It was a shared device, given that at its cost most families could only afford one, if any.

Additionally, the simultaneity of these events produced a shared experience whose importance cannot be overstated. This simultaneity was expressed through the common reading of texts, blog posts, and images. It was a product of real-time technology, which in the last case enabled synchronous play, but also of the Web 2.0 infrastructures which facilitated communication between users spread around the globe. In a sense, what users who remember these events share is lore: a body of knowledge on a subject which is gained through experience or passed from person to person by word of mouth, or, in the digital context, through text and secondary accounts such as videos. But what happens when the digital-prime experience is subsumed by subsequent transmissions? When lore is no longer experienced? The stories above are told through fora and threads in subreddits; the immersive is flattened into text.

To answer this, I have leaned on anthropological concepts in order to understand how memories may be shared in a cross-cultural context. Digital realms represent fully fledged cultures and communities. I utilize Eduardo Viveiros de Castro's perspectivism concept (Castro 1998) to better understand how, just as the perspective of Western observers diverges drastically from that of Amazonian tribes, a memory created in a digital realm cannot be understood by an outsider: one had to be there to understand. The phenomenology of digital memory is thus anchored in its lived experience, with being as a condition for comprehension. This approach belongs to the ontological turn, a centering of anthropological research on the ontological idea of being-in-the-world, a concept inherited from the German philosopher Martin Heidegger. Similarly, the reality of a digital-based reminiscence may only be comprehended through being-on-line—by being in that particular space and time when the plague occurred.

Furthermore, a purely literary transmission of the memory tends to obscure the aforementioned physical environment in which the memory was experienced. The whirs of the 2008 PC, the sounds of the modem, the taste of the dust blown by the hyperactive computer fans. Even the digital experience has physicality.

In sum, this paper began by exploring the material realities of digital memories. My initial thesis proposes that lore transmission cannot occur after the fact, and that by re-presenting the object and its contents, one cannot rekindle bygone souvenirs of yester-internets. The tension remains, and the transmission fails. Here, the digital mirrors the physical: in spite of its heightened replicability and the capability for ubiquity in virtual spaces, experiences are not replicable outside their loci of origin. This leads me to affirm that no, one cannot engage with it outside its circumscribed space and time.

You had to be there!
You have to be-on-line!

References

Castro, Eduardo Viveiros de. 1998. "Cosmological Deixis and Amerindian Perspectivism." The Journal of the Royal Anthropological Institute 4 (3): 469-488.

Jurgenson, Nathan. 2011. "Digital Dualism versus Augmented Reality." Cyborgology. February 24. Accessed June 21, 2023. https://thesocietypages.org/cyborgology/2011/02/24/digital-dualism-versus-augmented-reality/.

Kallinikos, Jannis, Aleksi Aaltonen, and Attila Marton. 2013. "The Ambivalent Ontology of Digital Artifacts." MIS Quarterly 37 (2): 357-370.

Rudnick, David, interview by Holly Herndon and Mat Dryhurst. 2021. "Primacism." Interdependence podcast, March 1.

12 A Cocreative Computational Approach to Musical Analogy

Nuno Trocado 12,
1 CEIS20 – Center for Interdisciplinary Studies, University of Coimbra, Portugal
2 Faculty of Arts and Humanities, University of Coimbra, Portugal
nuno@nunotrocado.com

Abstract

Analogy is a critical cognitive process, at the core of the multiple ways in which we think in and through music. With structure-mapping theory as a point of departure, I describe how a computational implementation of its theoretical tenets may frame an approach to musical analogy which, in a cross-domain or music-to-music generative setup, would amount to a novel variation of concatenative synthesis, but driven preferably by higher-order relational structures instead of by the mere similarity of feature vectors.

Keywords

Analogy, Music, Synthesis, Computational creativity

Description and Goals

In this paper, I describe a hypothetical technological framework for sound synthesis, which works as the other half of a human-machine cocreative system, grounded at least in part in a simulation of the cognitive capacity for analogy-making. My goal is not to design a system that generates music integrally or autonomously—i.e., where the machine appears to act of its own accord, producing finished or almost finished works or segments of works—but to better understand analogy and how it can be computationally exploited to foster and enhance human creativity, in particular in the context of my own artistic practice and the aesthetic values it entails.

Analogy at the Core of Musical Thought

Through analogy we compare things that are different, but share relevant commonalities, allowing us to project cognitively resonant structures between them and gain new functional insights. As the “fuel and fire of thinking” (Hofstadter and Sander 2013), analogy is central to a wide range of human abilities, ubiquitous in everyday thought, and determinant for our worldly experience. It is thus unsurprising that analogy shows up prominently in music. Participating in the musical phenomenon involves the cognition of sound-pattern formation and the mapping of gestural-temporal processes, which shape how we make, listen, think in or about music, feel and move through it, individually and collectively. Knowledge from a variety of domains is carried over, or projected, into sound, thus constituting our musical experience. Such projection is the characteristic mark of analogy.

Notwithstanding the centrality of analogical processes, musical studies have only recently started to examine their conceptual implications. Some accounts focus on correspondences through recurrence “within music,” such as thematic/formal roles (Bourne 2015), where the schematic repetition or transformation of a pattern in successive musical passages gives shape to “chains of analogy” (Kielian-Gilbert 1990). As analogical comparisons drive the processes of conceptualization and abstraction, analogy lies at the core, for example, of the concept of motives, through which we understand each new instance of a musical pattern by comparing it with other ones that we previously encountered, noting their shared structure despite the superficial dissimilarities. Analogy is also implicated in the cognition of metric groupings, or the exposition and recapitulation of a sonata form, or for that matter in the chorus/verse recurrence of a pop song. Other accounts focus on structural commonalities between different parameters, e.g., pitch and time (Bar-Yosef 2007; Eitan and Granot 2007). Furthermore, the suggestion that music works fundamentally as a “sonic analog for dynamic processes” (Zbikowski 2017) connects music with emotion, gesture, dance, or words. These connections amount to the view that our conceptual system is prominently metaphoric, i.e., constrained by the features of the human body and worldly experience (Lakoff and Johnson 1980), and structured by sensorimotor schematic patterns (Johnson 2007). Such a perspective, together with the analogy-like theory of conceptual blending (Fauconnier and Turner 2002), prompted emergent frameworks on how music is conceptualized (Brower 2000; Hatten 1995; Larson 2012; Saslaw 1996; Spitzer 2004; Zbikowski 2002).

Since, in these cases, music appears to be “standing for” a distinct reality, Zbikowski (2017) highlights that music makes use of a unique form of reference—“analogical reference”—which can be understood in terms of Peircean semiotics, and particularly in terms of the concept of the icon. Zbikowski observes that, as Peirce divided the icon into image, diagram, and metaphor, sonic analogs can be traced in a continuum between those categories, where sounds that more clearly mimic an actual audible event are positioned closer to image, and the sonic analogs for nonsonic dynamic processes closer to metaphor. Symbolic reference, by contrast, while predominant in language, is residual in music, being relegated to instances where a musical utterance is conventionally correlated with a specific referent, as it notably happens with the culturally shared associations that constitute the object of topic theory.

Structure-Mapping and Analogy in Artificial Intelligence

The computational modeling of analogy enjoys a rich history. Understandably, if the capacity for analogy is such a critical mark of intelligence, it follows that it must be somehow introduced in artificial intelligence systems. On the other hand, research on analogy as a cognitive process arose and was developed contemporaneously with the general perspective positing that human reasoning can be understood through its implementation in computer programs. The technical approaches for artificial analogy-making have followed the broader trends in the field of artificial intelligence, from older (but still promising) symbolic methods, which are based on the manipulation of symbols representing the knowledge for the base and target domains, to more recent deep learning techniques, as well as hybrid architectures. Reviewing several of these approaches, Mitchell (2021) concludes that despite the extensive efforts, which remain as active as ever, “no current AI system is anywhere close to a capability of forming humanlike abstractions or analogies,” while at the same time, such advances will be key for continued progress going forth from current state-of-the-art artificial intelligence models.

According to structure-mapping theory (Gentner 1983, 1989; Gentner and Smith 2013; Gentner et al. 2001), developed over the last four decades and now in some ways the classic, empirically validated framework for analogy, we are biased toward mapping relational structures, preferably systems of mutually connected higher-order relations, and not so much object properties or attributes—a preference called the systematicity principle. This is why we find the analogy between a house and a nest (same functional relationships) more compelling than the one between a planet and a ball (same shape)—see Figure 1.

Figure 01. The house/nest analogy is more compelling than the ball/planet one.

Through this distinction between relations and properties or attributes, it’s possible to contrast (as in a continuum rather than rigorously separated categories) analogy with other types of domain comparison (see Table 1). In literal similarity the mapping includes a large number of both object attributes and relationships. Mere appearance hinges on common attributes, but not relations. In abstraction, as in analogy, there are few attributes mapped to the target, but the base domain is already an abstract relational structure, which has few (or none) object attributes to begin with. Finally, a comparison presenting neither attribute nor relational overlap is an anomaly.

Table 01. Analogy among other types of domain comparison. Reproduced from Gentner (1989).

                    Attributes  Relations  Example
Literal similarity  Many        Many       Milk is like water
Analogy             Few         Many       Heat is like water
Abstraction         Few         Many       Heat flow is a through-variable
Anomaly             Few         Few        Coffee is like the solar system
Mere appearance     Many        Few        The glass tabletop gleamed like water

The Structure-Mapping Engine, SME (see Forbus et al. 2016 for the current iteration), is a computational implementation of structure-mapping theory. Like comparable systems, it follows significant assumptions: that analogy is domain-general; that its mechanism is purely syntactic, not constrained by the specific perceptual modes involved in the process; and that it is therefore independent of the way knowledge is structured in the base and target domains. Analogy is thus described as a neutral mechanism, operating in the same fundamental way between domains like water and heat, between two sets of different geometric drawings, or, say, between sound and the kinesthetic patterns evidenced in dancing. This means that, as a separate step prior to the analogy proper, explicit domain-knowledge representations must be constructed, in particular representations that go beyond flat feature vectors and capture nth-order relational structures—i.e., that designate the relations (and relations between relations) making up the structural constituents of the domain. These representations, however, don't have to be hand-coded and can be automatically generated or derived from perceptual input.
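The following toy sketch in Python illustrates the representational point, using the house/nest and planet/ball comparison from Figure 1. It is not the SME: the matcher merely counts shared relations, a deliberately naive stand-in for the systematicity-guided alignment the engine actually performs.

```python
# Domains encoded as relational propositions rather than flat feature
# vectors. The "matcher" below only counts relation names shared by
# both domains -- a toy stand-in for real structure mapping.
Base = list[tuple]  # propositions like ("protects", "house", "family")

house = [("lives_in", "family", "house"), ("protects", "house", "family")]
nest = [("lives_in", "birds", "nest"), ("protects", "nest", "birds")]
planet = [("round", "planet"), ("orbits", "planet", "sun")]
ball = [("round", "ball"), ("bounces", "ball")]

def relational_overlap(a: Base, b: Base) -> int:
    """Count relations (predicates of arity > 1) shared by both domains."""
    rels_a = {p[0] for p in a if len(p) > 2}
    rels_b = {p[0] for p in b if len(p) > 2}
    return len(rels_a & rels_b)

print(relational_overlap(house, nest))   # 2 -> compelling analogy
print(relational_overlap(planet, ball))  # 0 -> mere appearance ("round")
```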

Computational models such as the SME are in a sense disembodied, but it can be argued that they remain compatible with a connection tracing back domain knowledge to its roots in modality-specific, sensorimotor representations. Furthermore, the encoding and matching modules, while independent, can be interleaved, feeding back into each other—this process mirrors the empirical observation that high-level cognitive processes penetrate into and affect the operation of perceptual systems. Additionally, human intelligence and creativity may indeed be impossible to simulate on a full-scale model, or even in a less-ambitious imperfect simulacrum, as they are dynamically contingent on the features and history of the body, intertwined with environmental factors, and dependent on the specific, more or less unpredictable goals pursued by the agent. But, even if machines don’t possess these things, simulated outcomes remain pragmatically useful, either as a heuristic—furthering partial accomplishments and a more profound understanding of human cognition—or, in the sense that most concerns this endeavor, as an aesthetically valuable instrument for artistic practice, that retains a solid connection to the psychology of musical experience.

Audio Synthesis Through Analogy-Driven Mappings

The idea of applying cognitively resonant, domain-general computational models of analogy to music, or of integrating implementations such as the SME into sound generation tasks, remains largely unexplored. Some tentative music-related approaches (Eppe et al. 2018; Zacharakis et al. 2021) have instead followed the conceptual blending framework (Fauconnier and Turner 2002), which describes a very similar high-level cognitive process where elements and relations from two or more domains are compared, but conceptualizes their combination as a fusion (blend) into a new integrated mental space. The integration network model is meticulously specified and, as it is apt for formalization, has been an attractive framework for computational approaches. On the other hand, conceptual blending is targeted at the creation of hybrids, and is thus less flexible than more general models of analogy.

Otherwise, I find the tenets of structure-mapping particularly apt for the domain of music. Music is highly relational, at the very least because of its intrinsic temporality. What is the value of a single sound event, if it’s not taken in relationship with past and future ones? Besides, higher-order relations are manifest in the pervasiveness of conceptualizations that organize sound in hierarchies, processual configurations, or cause-effect chains.

In this context, I envision a representation of the sound domain that proceeds from segmenting an audio stream into multiple very short sonic tokens, quantified according to music information retrieval metrics, which are correlated temporally with image schematic patterns such as containment, source-path-goal, interruption, self-similarity, or pendulum. Such information is then stored in a corpus database. From here, the SME probes and acts upon hypothetical cross- or intra-domain mappings. Cross-domain mappings are made possible by having the non-sonic domain categorized through the same common image schemas. Music-to-music mappings would have another audio stream as the base for the analogy.
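As a sketch of what this corpus-building step could look like, the following Python fragment segments an audio stream at onsets and stores MIR descriptors per token, using the librosa library. The file name, the choice of descriptors, and the empty schema tag are assumptions for illustration; how tokens are correlated with image-schematic patterns is precisely the open design question.

```python
# Segment an audio stream into short sonic tokens and attach MIR
# descriptors to each, forming the corpus database described above.
import librosa
import numpy as np

y, sr = librosa.load("stream.wav", sr=22050)  # hypothetical input file
onsets = librosa.onset.onset_detect(y=y, sr=sr, units="samples")
bounds = np.concatenate([[0], onsets, [len(y)]])

corpus = []
for start, end in zip(bounds[:-1], bounds[1:]):
    token = y[start:end]
    if len(token) == 0:
        continue
    corpus.append({
        "samples": token,
        "centroid": float(np.mean(librosa.feature.spectral_centroid(y=token, sr=sr))),
        "rms": float(np.mean(librosa.feature.rms(y=token))),
        "schema": None,  # e.g. "source-path-goal", assigned by a later stage
    })
print(len(corpus), "sonic tokens stored")
```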

The creation of new sounds is accomplished through a kind of concatenative synthesis (Schwarz 2004)—a method of generating audio by selecting and assembling small sonic units from a large database of sound sources. Typically, the selection and assemblage are performed by attempting to match quantitative physical, perceptual, or statistical features (e.g., pitch, spectral centroid, average amplitude, tempo) of the sources. Such feature-matching depends on a specification of criteria for the similarity between sonic units. In the various kinds of domain comparison that were contrasted above, this kind of similarity would approximate “mere appearance,” since it deals predominantly with collections of object properties. In the analogy-driven setup that I propose, however, mappings would be established not according to the similarity of surface features, but according to the degree of isomorphism in relational structures.

Thus, mapped sonic units would not necessarily sound similar; instead, the resulting audio stream would exhibit deeper structural commonalities perceived as convincing, compelling, and surprising, despite the superficial differences—just like the analogies that we rely upon in our day-to-day life.
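For contrast, here is the baseline "mere appearance" selection in sketch form, assuming the corpus built in the previous fragment: each target token is replaced by the corpus unit with the nearest feature vector. In the analogy-driven variant, the nearest function would be swapped for a score over relational structure rather than surface features.

```python
# Baseline unit selection for concatenative synthesis: nearest-feature
# matching. Assumes `corpus` from the previous sketch; `target` is a
# token list built the same way from the stream to be re-synthesized.
import numpy as np

def features(token: dict) -> np.ndarray:
    return np.array([token["centroid"], token["rms"] * 1e4])  # crude scaling

def nearest(target_token: dict, corpus: list[dict]) -> dict:
    dists = [np.linalg.norm(features(target_token) - features(c)) for c in corpus]
    return corpus[int(np.argmin(dists))]

def concatenate(target: list[dict], corpus: list[dict]) -> np.ndarray:
    """Assemble an output stream by substituting each target token
    with its best-matching corpus unit."""
    return np.concatenate([nearest(t, corpus)["samples"] for t in target])
```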

In conclusion, I believe that this strategy leads to a machine-generating but human-steerable framework for producing novel timbres and sonic textures. One that, by being grounded in the cognitive capacity for analogy, exhibits a degree of creativity still lacking in artificial intelligence systems, and whose expected glitches, non-linearities, and incoherences could be artistically useful in music-making.

References

Bar-Yosef, Amatzia. 2007. “A Cross-Cultural Structural Analogy Between Pitch and Time Organizations.” Music Perception 24 (3): 265–80. https://doi.org/10.1525/mp.2007.24.3.265.

Bourne, Janet. 2015. “A Theory of Analogy for Musical Sense-Making and Categorization: Understanding Musical Jabberwocky.” PhD thesis, Northwestern University.

Brower, Candace. 2000. “A Cognitive Theory of Musical Meaning.” Journal of Music Theory 44 (2): 323. https://doi.org/10.2307/3090681.

Eitan, Zohar, and Roni Y. Granot. 2007. “Intensity Changes and Perceived Similarity: Inter-Parametric Analogies.” Musicae Scientiae 11 (1): 39–75. https://doi.org/10.1177/1029864907011001031.

Eppe, Manfred, Ewen Maclean, Roberto Confalonieri, Oliver Kutz, Marco Schorlemmer, Enric Plaza, and Kai-Uwe Kühnberger. 2018. “A Computational Framework for Conceptual Blending.” Artificial Intelligence 256: 105–29. https://doi.org/10.1016/j.artint.2017.11.005.

Fauconnier, Gilles, and Mark Turner. 2002. The Way We Think: Conceptual Blending and the Mind’s Hidden Complexities. New York: Basic Books.

Forbus, Kenneth D., Ronald W. Ferguson, Andrew Lovett, and Dedre Gentner. 2016. “Extending SME to Handle Large-Scale Cognitive Modeling.” Cognitive Science 41 (5): 1152–1201. https://doi.org/10.1111/cogs.12377.

Gentner, Dedre. 1983. “Structure-Mapping: A Theoretical Framework for Analogy.” Cognitive Science 7 (2): 155–70. https://doi.org/10.1207/s15516709cog0702_3.

———. 1989. “The Mechanisms of Analogical Learning.” In Similarity and Analogical Reasoning, edited by S. Vosniadou and A. Ortony, 199–241. Cambridge: Cambridge University Press.

Gentner, Dedre, Brian F. Bowdle, Phillip Wolff, and Consuelo Boronat. 2001. “Metaphor Is Like Analogy.” In The Analogical Mind: Perspectives from Cognitive Science, edited by D. Gentner, K. J. Holyoak, and B. N. Kokinov, 199–253. Cambridge, MA: MIT Press. https://doi.org/10.7551/mitpress/1251.003.0010.

Gentner, Dedre, and Linsey A. Smith. 2013. “Analogical Learning and Reasoning.” In The Oxford Handbook of Cognitive Psychology, edited by Daniel Reisberg. New York: Oxford University Press. https://doi.org/10.1093/oxfordhb/9780195376746.013.0042.

Hatten, Robert S. 1995. “Metaphor in Music.” In Musical Signification, edited by Eero Tarasti, 373–92. De Gruyter Mouton. https://doi.org/10.1515/9783110885187.373.

Hofstadter, Douglas, and Emmanuel Sander. 2013. Surfaces and Essences: Analogy as the Fuel and Fire of Thinking. New York: Basic Books.

Johnson, Mark. 2007. The Meaning of the Body. Chicago: University of Chicago Press.

Kielian-Gilbert, Marianne. 1990. “Interpreting Musical Analogy: From Rhetorical Device to Perceptual Process.” Music Perception 8 (1): 63–94. https://doi.org/10.2307/40285486.

Lakoff, George, and Mark Johnson. 1980. Metaphors We Live by. Chicago: University of Chicago Press.

Larson, Steve. 2012. Musical Forces: Motion, Metaphor, and Meaning in Music. Bloomington, IN: Indiana University Press.

Mitchell, Melanie. 2021. “Abstraction and Analogy-Making in Artificial Intelligence.” Annals of the New York Academy of Sciences 1505 (1): 79–101. https://doi.org/10.1111/nyas.14619.

Saslaw, Janna. 1996. “Forces, Containers, and Paths: The Role of Body-Derived Image Schemas in the Conceptualization of Music.” Journal of Music Theory 40 (2): 217. https://doi.org/10.2307/843889.

Schwarz, Diemo. 2004. “Data-Driven Concatenative Sound Synthesis.” PhD thesis, Université Paris 6 – Pierre et Marie Curie. http://recherche.ircam.fr/equipes/analyse-synthese/schwarz/thesis/.

Spitzer, Michael. 2004. Metaphor and Musical Thought. Chicago: University of Chicago Press.

Zacharakis, Asterios, Maximos Kaliakatsos-Papakostas, Stamatia Kalaitzidou, and Emilios Cambouropoulos. 2021. “Evaluating Human-Computer Co-Creative Processes in Music: A Case Study on the Chameleon Melodic Harmonizer.” Frontiers in Psychology 12. https://doi.org/10.3389/fpsyg.2021.603752.

Zbikowski, Lawrence M. 2002. Conceptualizing Music: Cognitive Structure, Theory, and Analysis. Oxford: Oxford University Press.

———. 2017. Foundations of Musical Grammar. New York: Oxford University Press. https://doi.org/10.1093/oso/9780190653637.001.0001.

13 Creating with Marine Fish: Interspecies Architecture as a Communication Tool

Anja Wegner 1,
1 Max Planck Institute of Animal Behaviour, Konstanz, Germany
awegner@ab.mpg.de

Abstract

This PhD project explores a transdisciplinary approach to engaging with marine fish through the process of co-creating architecture and ecological niches. In addition to exploring new modes of perceiving and interacting with the living marine world, this project investigates whether we can co-design architectural elements that could be used to build for humans before eventually being engulfed by the rising sea and becoming a habitat for the co-designers, the sea creatures. Fish Architecture and Interspecies Architecture are concepts and physical media to communicate and engage with the Mediterranean damselfish Chromis chromis. In collaboration with SUPERFLEX and Chromis chromis over the past two years, we designed ecological structures to offer a space for reproduction and interspecies encounters. With their behaviours and bodies, the fish used and thereby curated the physical structures. Besides enabling the analysis of fish behaviours, the structures facilitate the creation of a human-fish kinship in which both species shape each other's (evolutionary) history. As an evolution of Fish Architecture, which focuses on the fish and their ecological needs, I introduce the idea of Interspecies Architecture, which would return the structures curated by the Chromis to the human, to understand how both the aquatic and terrestrial species interact with the architectural elements.

Keywords

Marine biology, Architecture, Interspecies Communication, Art-Science, Transdisciplinarity.

Living with the 6th mass extinction

Amid the climate crisis, humanity must reconsider its relevance within the global ecological network. To date, 40 countries worldwide have declared a climate emergency, and scientists continue urging governments to commit to and follow the climate emergency declaration, which calls for the restoration of nature and the protection of pristine ecosystems (CEDAMIA 2023; Wang et al. 2011). However, pristine nature is a human concept that merely depicts the unrealistic romantic idea of "untouched nature". Restoration and conservation treat symptoms of humanity's damaged relationship with the rest of the living world but do not dismantle the Western narrative of the nature-human dichotomy that causes the decline of species and ecosystems. Western scholars have eventually started to theorise new modes of coexistence with non-human entities, acknowledging their agency and accepting ecological interconnectedness (Bennett 2010; Morton 2019). They even emphasise the sym-poiesis ("making-with") of all earthlings, who are never alone but part of a complex and dynamic holobiont (Haraway 2017).

Following these concepts, the question "What would animals say if we asked the right questions?" (Despret 2016) naturally comes to mind. In her analysis of this question, Despret dissects scientific knowledge and experimentation with animals to depict what we can learn from those other animals. In my doctoral thesis, I attempt a practical approach to this question, guided by the interspecies endeavour to co-create with marine fish. The lack of a common human-fish language opens this relationship to new modes of communication, focusing on movement and behaviour as a means of communication: a bodily language. Moreover, working at the nexus of the natural sciences and the arts, disciplines commonly perceived as opposites, allows us to alter disciplinary frameworks and engage in a transdisciplinary approach that redefines the methods of knowledge production and goes beyond the disciplines to reconsider disciplinary concepts of knowledge (Nicolescu 2014).

Co-constructing during the 6th mass extinction

Only 1.5 degrees determine the planetary future. Even if states met the target of the Paris Agreement of limiting global warming to well below 2°C (which, according to the latest IPCC report, is not possible unless governments implement stricter measures immediately), 70-90% of coral reefs would die, and the sea level would inevitably rise (Lee et al. 2023). Such a prospect demands that humanity develop new approaches to co-existing with other survivor species on an altered planet. This project originated from this gloomy outlook on a future in which the sea has engulfed the coastal areas, among the most densely populated areas of the planet. Our architecture will then no longer be inhabited by land-bound humans; underwater organisms will dwell in it. They will sleep, reproduce, and feed in what we once designed as human spaces. Based on this idea, we suggest embracing the underwater future of currently terrestrial areas, imagining how this could alter our approach to human architecture when the ultimate client is a fish, and eventually redefining our relationship with those non-humans.

Figure 01. Chromis chromis in Posidonia, Mediterranean Sea.

As a first step of my PhD, I established Fish Architecture as a tool to communicate with, create with, and learn from damselfish (Wegner et al. 2021). Behavioural ecology, the study of animal behaviour and its evolution, offers tools to closely engage with the animals in their habitats. Working with artists and architects extends the framework of rigorous experimental setups and creates new contexts in which observations and experiments can be conducted. Behaviour and movement become the language humans can try to decipher. Physical structures, informed by the scientific literature about specific damselfish species and designed in collaboration with SUPERFLEX studio, began as an architectural conversation between humans and damselfish. Fish Architecture considers the fish the ultimate client: the animal inhabiting human-made architecture once the warming sea has engulfed those structures. The central research and artistic question therefore becomes, “What do Fish like?”. The concept differs, however, from endeavours such as artificial reefs, which are focused on the flourishing of diverse marine life. Fish Architecture is a medium to co-create with other species, establishing new ways of interaction and making kin (Haraway 2016). The architectural fish-human dialogue can be considered a collaborative niche construction. This evolutionary concept describes the animal as the designer or engineer of its own environment, thereby changing the selective evolutionary pressure it experiences and creating a reciprocal relationship between organism and environment (Odling-Smee, Laland, and Feldman 1996). As soon as all involved animals are designers and engineers of their evolution, ecological interspecies creation becomes possible. Once an architectural invitation, a physical structure, is intentionally placed in the ocean and accepted by the fish, who settle and reproduce on it, this Fish Architecture becomes the ground for a conversation between its occupants (the fish) and the initial designer (the human).

Marine biology meets art and architecture

Our architectural conversation in the Mediterranean started with the damselfish Chromis chromis, which primarily resides in large aggregations in the water column. Only in the summer months, during the reproductive period of the fish, do males descend to the substrate and search for structures on which to establish temporary nesting sites. Aligned with the waxing moon, the damselfish start spawning. Once the males have established territories around their nests, they perform signal jumps to attract females, who come down to the nesting site to release their eggs (Abel 1961). Chromis choose a variety of structures on which to establish their nesting sites (Guldenschuh 1986), which offers an opportunity to test different human-designed structures.

The ongoing analysis describes the changes in behaviour and social network structure throughout one spawning bout, which should help us understand the temporal socio-spatial relationship, or in other words, how and why the damselfish use the structures the way they do (a sketch of such an analysis follows after Fig. 2). The structures and our knowledge about the Chromis have evolved throughout multiple field seasons (Fig. 2). For each design, the focus was on the fish and whether and how they use the structures designed for them. The first, modular structures, Fish Lego, which could be reassembled by humans, were not used by the fish during their spawning bouts. Informed by observations in the field and by the artistic decision to use more organic and fractal patterns, the design evolved into Scutoids, which offered more surface area. Expanding the concept of the Scutoid by maximising overhangs while offering enough surface for nests, the newest set of structures, As Close As We Get, evolved as an experiment with three structures representing contrasting architectural ideas, which will be tested during the field season 2023.

Figure 02. Evolution of Fish Architecture from 2021 to 2023.
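As an illustration of how the socio-spatial analysis mentioned above could be set up, the sketch below links fish observed at the same nesting site within a time window and tracks per-window centrality, using the networkx library. The observation format and all values are hypothetical stand-ins for annotated video data.

```python
# Co-occurrence networks per time window: fish recorded at the same
# nesting site are linked, and centrality traces how the network
# changes across a spawning bout.
import networkx as nx

# (time_window, fish_id, site_id) -- stand-in for annotated video data
observations = [
    (0, "m1", "nestA"), (0, "f1", "nestA"), (0, "m2", "nestB"),
    (1, "m1", "nestA"), (1, "f2", "nestA"), (1, "f1", "nestA"),
]

def network_for_window(window: int) -> nx.Graph:
    G = nx.Graph()
    sites = {}
    for t, fish, site in observations:
        if t == window:
            sites.setdefault(site, []).append(fish)
    for fish_list in sites.values():  # co-occurrence at a site -> edge
        for i, a in enumerate(fish_list):
            for b in fish_list[i + 1:]:
                G.add_edge(a, b)
    return G

for w in (0, 1):
    print(w, nx.degree_centrality(network_for_window(w)))
```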

Interspecies Architecture

The humans involved in the process were limited to a small group of scientists and artists, merging information from observations with creative abstraction to create an architectural catalogue of Fish Architecture. However, influence on the shape of the structures remains the privilege of this small group of people. To gain a better understanding of what humans like about Fish Architecture, or which aspects they prefer, I suggest opening the concept up to Interspecies Architecture. Interspecies Architecture would include what we have already learned about the fish on the structure and, in addition, allow humans to interact with those structures "curated" by the fish. Fish and human behaviour and movements around and with the structures would thereby be the first approach to a comparative study of two species interacting with the architecture. Such a comparative study, impossible within the biological disciplinary framework, becomes possible only in the transdisciplinary space created through the collaboration between art and ecology. Moreover, it is not only a study of the use of architecture; it also explores the relationship between two organisms populating different ecosystems and spaces connected through the same structure.

A first visualisation of the territories of Chromis on Scutoid was rendered for humans in Interspecies Intimacy (Fig. 3). For this, video data were analysed and rendered in a virtual space to depict how the fish curated the structure through their behaviours. The shapes and sizes of those territories depict where the fish spend most of their time, changing throughout one spawning bout. The virtual environment should not emulate the underwater experience but rather depict an uncanny scene, representing the strangeness and eeriness that might occur when thinking about the rapidly approaching planetary shift, drastically rising sea levels, and submerged human spaces populated by sea creatures. It is a first step towards an Interspecies Architecture, which allows the fish-human conversation to transcend time, space, and species. As the development of Interspecies Architecture continues, the initial question evolves from “What do Fish like?” to “What do humans like about what Fish like?”. Like the fish, the human would become the study organism. The cross-disciplinary context of art and ecology would allow such a playful comparative study, simultaneously facilitating the co-creation of space by two species usually populating drastically different ecological niches.

Figure 03. Render of the video installation Interspecies Intimacy, in collaboration with SUPERFLEX, Alex Jordan & Anja Wegner, at A City Beyond: Rethinking Co-Habitation, WE ARE AIA Gallery, Zurich. Video asset (source: https://tinyurl.com/2p9ynkpe)
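One plausible route from video data to rendered territories is an occupancy map: tracked positions are binned over the structure, and dwell-time density marks a territory. The sketch below uses synthetic positions as a stand-in for the actual tracking output.

```python
# Occupancy map from tracked fish positions: dense regions of the
# dwell-time distribution approximate a territory on the structure.
import numpy as np

rng = np.random.default_rng(0)
# (x, y) positions of one male across a spawning bout, in structure coords
positions = rng.normal(loc=[0.4, 0.6], scale=0.08, size=(500, 2))

heatmap, xedges, yedges = np.histogram2d(
    positions[:, 0], positions[:, 1], bins=20, range=[[0, 1], [0, 1]]
)
occupied = heatmap / heatmap.sum()  # dwell-time distribution over the structure
print(occupied.max())  # peak marks the centre of the rendered territory
```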

Fish-human kinship

Humans involved in the design, placement, and observation of Fish Architecture commit to an intimate non-gestational kinship (Hessler 2021) with the Chromis or, as Isabelle Stengers would phrase it, a reciprocal capture, in which both beings become part of each other's (evolutionary) history (Stengers 2010). I refer to those intimate encounters as the Sex Ecology of Chromis chromis: moments of coexistence and cocreation of identities, facilitated through Fish Architecture, that can happen independently of physical intimacy. Sex Ecology encompasses sexual reproduction as well as the ecological relationship of the humans involved in the Chromis's reproduction. The scientist shows appreciation by giving a name and generating more human knowledge about another species, creating more attention in the human world for a small fish. Meanwhile, the fish create the scientist's identity by allowing themselves to be part of the scientific work; the scientist then structures their life and field seasons according to the rhythm of the fish, the spawning season in the case of the Chromis. What happens if other humans are involved in that process? How do the different species shape each other's evolutionary history? Moreover, how does it change the human perspective on this fish-human relationship?

Scientific analysis is one aspect of Sex Ecology that informs the next stage of the design of Fish Architecture, but through the practice of Sex Ecology much more happens in the relationship between fish and humans. Although within the scientific discipline we learn more about the species Chromis chromis, this transdisciplinary approach allows scientists or designers to create a relationship that is neither bound nor compromised by the nature/culture duality. As described by De Castro (1998), indigenous cosmologies also delineate concepts of nature and culture; however, the relationship between them is not described from an anthropocentric perspective that creates social distinctions, but is instead considered a social continuity between nature and culture. Indigenous scholars have shared their knowledge and described their kinship with the non-human world, interlaced with Western scientific methods of knowledge production (Kimmerer 2013; Nelson 2008). Informed by such pedagogies, transdisciplinary frameworks facilitate the more-than-human relationships we need in order to evolve and climb out of the mental pitfall we created with the Anthropocene. Nevertheless, Western methods of knowledge production should not be dismissed but need to be revisited to include different perspectives.

Engaging in the Sex Ecology of another animal allows us to use methodologies from different disciplines to obtain a multitude of knowledges not bound by academic and disciplinary boundaries. Fish Architecture, and its extension Interspecies Architecture, are methods developed through a transdisciplinary collaboration; they propose novel modes of interspecies engagement to create a new perspective on the human-fish relationship.

References

Abel, E. F. 1961. "Freiwasserstudien über das Fortpflanzungsverhalten des Mönchfisches Chromis chromis Linné, einem Vertreter der Pomacentriden im Mittelmeer." Zeitschrift für Tierpsychologie 18 (4): 441-449.

Bennett, Jane. 2010. Vibrant Matter: A Political Ecology of Things. Duke University Press.

CEDAMIA. 2023. Climate Emergency Declarations. Accessed June 20, 2023. https://www.cedamia.org/global/

De Castro, Eduardo Viveiros. 1998. "Cosmological Deixis and Amerindian Perspectivism." Journal of the Royal Anthropological Institute 4 (3): 469-488.

Despret, Vinciane. 2016. What Would Animals Say If We Asked the Right Questions? (Vol. 38). University of Minnesota Press.

Guldenschuh, G. 1986. "Das Fortpflanzungsverhalten von Chromis chromis (L.), dem Mittelmeer-Mönchsfisch (Pisces: Pomacentridae)." Doctoral dissertation, University of Basel, Switzerland.

Haraway, Donna J. 2016. Staying with the Trouble: Making Kin in the Chthulucene. Duke University Press.

Hessler, Stefanie (Ed.). 2021. Sex Ecologies. MIT Press.

Kimmerer, Robin. 2013. Braiding Sweetgrass: Indigenous Wisdom, Scientific Knowledge and the Teachings of Plants. Milkweed Editions.

Lee, H., et al. 2023. Synthesis Report of the IPCC Sixth Assessment Report (AR6): Summary for Policymakers. Intergovernmental Panel on Climate Change.

Morton, Timothy. 2019. Being Ecological. MIT Press.

Nelson, Melissa K. (Ed.). 2008. Original Instructions: Indigenous Teachings for a Sustainable Future. Simon and Schuster.

Nicolescu, Basarab. 2014. "Methodology of Transdisciplinarity." World Futures 70 (3-4): 186-199.

Odling-Smee, F. John, Kevin N. Laland, and Marcus W. Feldman. 1996. "Niche Construction." The American Naturalist 147 (4): 641-648.

Stengers, Isabelle. 2010. Cosmopolitics (Vol. 1). Minneapolis: University of Minnesota Press.

Wang, Y., J. Cao, and C. Yang. 2011. "Recovery of Seismic Wavefields Based on Compressive Sensing by an l1-Norm Constrained Trust Region Method and the Piecewise Random Subsampling." Geophysical Journal International 187 (1): 199-213.

Wegner, Anja, SUPERFLEX, and Alex Jordan. 2021. "Fish Architecture: A Framework to Create Interspecies Spaces." Proceedings of Politics of the Machines - Rogue Research 2021.