AI's Deep Rewrite: How Modularity Shapes Our Memory and Reality

What if the very fabric of our reality, from how we perceive the world to how we remember it, is constantly being rewritten by artificial intelligence? This is no longer science fiction; it's a critical question for understanding the modern world. At the heart of this shift lie 'modularity' (the idea that systems can be broken down into interchangeable parts) and 'indexicality' (a record's direct, traceable connection to the reality it documents). These concepts, once foundational to fields like photography, are now supercharged by machine learning, fundamentally altering how we interact with information and even culture itself. Think of projects like This Person Does Not Exist as an early example of how generative AI reshapes what we consider 'real.'
Modularity and Indexicality in the Age of AI
Precision measurement, or indexicality (the direct link between a record and what it captures), has always been vital for verifying facts and producing things efficiently. Historically, photography exemplified this, providing an objective "index" of reality. The rise of generative AI models, such as Generative Adversarial Networks (GANs), has radically shifted this understanding.
Take This Person Does Not Exist, for example. This project leverages photography's data-rich nature to analyze countless images, treating them as individual modules within vast datasets. These modules are then used by the GAN to create entirely new images—faces of people who don't actually exist. The groundbreaking part? These aren't composites or manipulated photos of real people; they're digital fictions born from an analysis of existing photographic data.
This introduces a meta-level reality where the traditional assumption of a photograph as a direct record of reality breaks down. When you see an image from This Person Does Not Exist, you can't assume it represents a past event. Instead, it's an abstraction, sampling thousands of real-world images to generate something expected to look real. This gets even more complex if generative AI outputs are fed back into algorithms to create further generations of synthetic images, deepening the disconnect from our actual world.
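This feedback risk can be sketched with a toy simulation (a hypothetical illustration, not a real training pipeline): if a simple Gaussian "model" is repeatedly fitted to its own synthetic samples, the distribution's spread drifts toward collapse over generations, a statistical analogue of what researchers call model collapse in generative AI.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "real" data, drawn from a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=50)
stds = [data.std()]

# Each generation fits a Gaussian to the previous generation's samples,
# then emits new synthetic samples from that fit, losing a little
# variance each time until the "world" the model sees has collapsed.
for _ in range(1000):
    mu, sigma = data.mean(), data.std()    # fit the current "model"
    data = rng.normal(mu, sigma, size=50)  # sample the next generation
    stds.append(data.std())

print(f"spread of generation 0:    {stds[0]:.3f}")
print(f"spread of generation 1000: {stds[-1]:.6f}")
```

The numbers are illustrative only, but the trend is the point: each generation inherits a slightly narrower picture of reality than the one before it.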
This process can create what feels like an informational echo chamber, blurring the lines between what's real and what's algorithmically generated. Such a 'meta-displacement' might even contribute to apathy, as people become detached from the tangible consequences of real-world actions, prioritizing immediate gratification over long-term environmental and societal impacts, reminiscent of an AI hallucination, where the output seems real but lacks true grounding.
Modularity, Culture, and Nature
Modularity doesn't just apply to digital images; it shapes how we interact with culture and even nature. Historically, humans have sought to "domesticate" nature, treating it as a collection of compartmentalized parts to be reshaped for specific needs. This very approach, however, highlights the limitations of modular thinking when applied to natural systems, as seen in the unexpected repercussions of climate change. Unlike human-made objects, nature isn't a structure with easily swappable parts; remove one, and the whole system responds unpredictably.
This modular mindset became truly pervasive with online communication, where information and data are treated as reconfigurable bits shared across networks, ripe for data-mining and optimization. This tendency, driven by profit, culminates in concepts like the metaverse. Remember when Facebook rebranded to Meta? Their vision for a virtual reality platform mimics the real world but is entirely modular, allowing designers to swap out every component. While these ventures aim to re-introduce an "embodied" experience into networked life, creating a seemingly natural yet fully modular space could have profound, potentially detrimental, impacts on human existence. This is a space ripe for further generative AI exploration, creating experiences that feel real but are entirely synthetic.
This modularization of information, often accelerated by machine learning algorithms, also fuels the issue of online echo chambers, isolating people in insular spaces where empathy and diverse viewpoints struggle. Art, however, offers a counterpoint. From early avant-garde movements, artists have harnessed modularity not to enforce unity, but to explore difference and change, using it as a vehicle for reflection and understanding.
Modularity, Art, and Design
Art has long embraced modularity. From Dadaists using chance operations with interchangeable elements to conceptual artists like Marcel Duchamp repurposing 'readymades,' the idea of creative 'pieces' has been central. This selective recombination is increasingly being delegated to machine learning algorithms, transforming the creative process itself.
In contemporary digital art, modular principles are evident in works like Casey Reas's Software Structures, where artists create diverse visuals from a single algorithm, or Owen Mundy's I Know Where Your Cat Lives, which modularly accesses public data to visualize privacy vulnerabilities.
However, a key challenge arises when we merge art with AI. The critical distance artists often seek can be blurred because the algorithm's design—how it gathers, interprets, and outputs information—inherently reflects the biases and inclinations of its human creator. This selective process, reminiscent of remix culture's appropriation, means that even generative AI art is deeply rooted in human choices.
Modularity After Remix
Remix culture, in which existing elements are openly recombined, became pervasive with mechanical reproduction and digital technology. Our modern understanding of remix, treating everything from natural resources to human creations as interchangeable, manipulable parts, fundamentally stems from this modular mindset. From photomontages by Dadaists like Hannah Höch to musical sampling by pioneers like John Cage, artists have long explored taking existing material and recontextualizing it.
Today's digital world, with its ubiquitous 'cut, copy, paste' functions, epitomizes this. Artists translate sound into images and vice versa, pushing creative boundaries. While not all of these approaches directly use artificial intelligence, they pave the way for machine learning to generate non-human visual art and sound compositions. Projects by Memo Akten, for instance, demonstrate how machine learning can create works unimaginable with traditional instruments.
The essence of sampling, in a way, contains the seeds of AI itself, challenging our human-centered notions of creativity. Some might call sampling 'lazy,' but it actually reveals a nuanced process of intertextual exchange, much like how jazz and rock 'n' roll evolved by emulating and citing previous works. Hip hop streamlined this with samplers in the 1980s, paving the way for the algorithmically assisted music production we now see in machine learning tools and modern workstations like Ableton Live.
Beyond music, AI is creating deepfakes in cinema—resurrecting actors like Princess Leia or James Dean—and even cloning voices, as seen with Anthony Bourdain and Andy Warhol in documentaries. These potent examples of artificial intelligence leveraging remix principles are possible because modularity allows every step of the creative process to be reconfigurable and adaptable to a producer's demands.
Modularity as Binder and AI's Societal Impact
Modularity is a powerful binder, optimizing information exchange and constantly reshaping our world. Remember This Person Does Not Exist? It highlights a danger: a homogenous, displaced reality that relies not on the physical world but on layers of meta-references. If generative AI imagery is continually fed back into algorithms, it risks moving further away from representing actual people, entering a loop of manufactured resemblance. This phenomenon echoes the concept of AI hallucination, where AI models generate content that, while appearing coherent, lacks grounding in verifiable reality.
This emerging tendency creates an ideology where our perception of the real world is increasingly shaped by what machine learning creates. Consider two striking examples: First, the alarming rise of 'filter dysmorphia,' where people seek plastic surgery to resemble algorithmically 'smoothed' and 'enhanced' versions of themselves, often conforming to biased templates. The default 'whiteness' observed in This Person Does Not Exist images is a similar manifestation of algorithmic bias, with real implications for civil society, as seen in biased AI algorithms used in US courts disproportionately affecting minority groups.
Second, Meta's (formerly Facebook) multi-billion-dollar investment in the metaverse exemplifies this modular future. While based on the physical world, the metaverse is designed to be entirely modular, allowing users to swap out avatars, wardrobes, and even experiences, creating a meta-economy. This demonstrates how all aspects of material and immaterial production are increasingly treated as interchangeable parts.
The paradox of modularity and artificial intelligence is that as we seek to create diverse experiences, we often rely on homogeneous templates, pushing us toward idealized forms while simultaneously detaching us from the physical world. This redefines both embodied and disembodied experiences through layers of tech-driven meta-engagement.
Memory and Artificial Intelligence
The iconic Voight-Kampff Test from Blade Runner, inspired by the Turing Test, reflects a historical shift in how we understand human memory. Early computing modeled memory on the individual human mind, and cognitive psychology in turn came to view memory as a storage-and-retrieval machine. This symbiotic relationship, as historian Kurt Danziger notes, has profoundly shaped our digital culture. The very origins of artificial intelligence in the 1940s, with cybernetics pioneer Norbert Wiener, focused on automation and feedback loops observed in both living things and machines. This early work laid the groundwork for AI as we know it today: a field largely defined by automation.
We've consistently anthropomorphized smart machines, delegating creative tasks and projecting human traits onto our creations. Think 'desktop' or 'mouse'—these metaphors connect human and computer. We've optimized computers to mirror human thought, even designing them to physically resemble us. Science fiction, from Isaac Asimov's robots to Philip K. Dick's memory implants, has explored this obsession, and now, concepts like the metaverse and Web3 are turning these dystopian visions into conceptual blueprints for our future. These virtual worlds, in essence, remix our understanding of memory and experience as modular 'implants.'
But how does human memory, a constantly reconstructed process, differ from the data storage of AI?
Memory and Remix
Cognitive psychology views human memory as a reconstructive process, piecing together fragments through selective retrieval, much like a paleontologist reconstructs a dinosaur from scattered bones. This shares a fascinating parallel with remix culture, where artists appropriate and repurpose existing material. Think of legendary hip hop albums like Beastie Boys' Paul's Boutique or De La Soul's 3 Feet High and Rising, built from countless samples that would be too costly to clear today. These are new forms created from fragments, yet instantly recognizable to those who know the sources.
While human memory aims to recreate an event as accurately as possible, its inherent unreliability means it's constantly changing, evolving into something akin to an 'intra-remix' (parts of a single event remixed). Sometimes, imagination even introduces elements not originally present, blurring the line into an 'inter-remix,' creating entirely new, false memories. This selective appropriation, drawing from a 'database' of mental fragments, defines human memory reconstruction.
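As a playful sketch (every name and fragment here is invented purely for illustration), this selective, fallible retrieval can be modeled in a few lines of Python: most recalled fragments come from the original event (intra-remix), while imagination occasionally substitutes an unrelated fragment (inter-remix), yielding a plausible but partly false memory.

```python
import random

random.seed(7)

# A "database" of fragments from one event (the intra-remix pool)...
beach_day = ["sand", "waves", "ice cream", "sunburn", "seagulls"]
# ...and fragments from unrelated experiences (inter-remix intrusions).
other_events = ["birthday cake", "rainstorm", "traffic jam"]

def recall(event, intrusions, p_intrusion=0.2):
    """Reconstruct a memory by sampling fragments; occasionally an
    unrelated fragment intrudes, producing a 'false' detail."""
    memory = []
    for fragment in random.sample(event, k=4):       # selective retrieval
        if random.random() < p_intrusion:
            fragment = random.choice(intrusions)     # imagination intrudes
        memory.append(fragment)
    return memory

print(recall(beach_day, other_events))
```

Each run reconstructs a slightly different "beach day," which is precisely the instability the essay describes: the memory is assembled fresh every time, never replayed.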
However, a crucial distinction emerges with machine learning algorithms. Unlike humans, AI models don't 'reconstruct' memories with instability; they generate new content from input data. While generative AI might produce outputs that seem human-created, it's not experiencing recall in the same fallible way. This marks a profound shift: AI isn't just storing information; it's creating new cultural objects from collective data, an unprecedented occurrence in human history that impacts our understanding of emotions and embodiment.
Memory, Emotion, and Embodiment
If memories are constructed, then so are emotions. Psychologist Lisa Feldman Barrett's research suggests emotions aren't universal, but simulations our brains create to make sense of sensory input, constantly re-evaluated by our experiences. This means emotions are inherently individual and fluid—no two people experience 'sadness' identically.
This groundbreaking idea opens a path for artificial intelligence: if human emotions and memories are constructed and simulated, could AI algorithms be implanted with 'memories' to build their own emotional framework? This very speculation drives the plot of Blade Runner, where replicants develop emotions over time, shaped by their implanted memories. The film's sequel, Blade Runner 2049, pushes this further, asking what distinguishes synthetic life from humans if replicants can not only feel but also reproduce.
Ultimately, the debate over memory and emotions is about what fundamentally defines us as human. As science seeks to replicate nature with increasing precision, the line between humanity and the technology we create blurs, making the role of memory—internal or external—pivotal in defining our past, present, and future.
Metamemory: The Future of Digital Recall
Imagine reconstructing actual human memories outside the brain. Artificial intelligence is making strides toward this, translating brain activity into images, though accuracy is still developing. The promise is external, digital memories: reliable, precise files that, unlike our own fallible recollections, won't degrade with each retrieval. This could revolutionize how we store personal histories, much like photographs or video files, ready to be re-experienced with unprecedented accuracy. Science fiction, from Strange Days to Black Mirror's 'Crocodile' and even Marvel's Doctor Strange in the Multiverse of Madness, has long explored these possibilities.
However, a digital copy of a memory, being identical to its source file, erodes the concept of an 'original' memory. As we move towards cloud storage for personal memories, the line between original experience and perfect replication blurs. While digital storage promises accuracy, personal interpretation of information remains fluid.
Ultimately, human memory will continue to operate at a 'meta-level,' reconstructing from networked archives that are themselves reconstructions. This layered 'meta-reality' could challenge our very definition of truth. As emotions and contexts become modularized in spaces like the metaverse, individuals might increasingly choose environments that reinforce their existing worldviews, fostering a dangerous homogeneity and excluding diversity. Understanding how artificial intelligence is deliberately shaping this future, often through remix principles, is an existential challenge for humanity. Proactive engagement is key to ensuring a fair approach to memory and reality in the age of machine learning.
From Hardware to Software: The Evolution of Labor
Humanity's drive to shape the world, once rooted in physical labor, is increasingly moving towards a 'disembodied' form with the rise of widespread computing. This shift is driven by what's called 'soft labor'—producing goods and services reliant on information and data. It began with automating repetitive physical tasks in factories, then expanded to software automating intellectual processes. Now, artificial intelligence and self-training machine learning are taking this to unprecedented levels.
Soft labor blurs the historical lines between blue-collar and white-collar work. Think about it: smartphones are ubiquitous tools across all professions, but specialized software customizes them for diverse needs. This allows for data-mining and optimization of nearly every task. While the distinction between physical and intellectual labor still exists, soft labor, with its automation of both, is eroding these boundaries. As AI evolves towards agentic AI and Artificial General Intelligence (AGI)—where systems can assign and improve tasks themselves—the human role may shift to higher-level selection and oversight, further reshaping our understanding of work.
Soft Labor and Posthumanism
As artificial intelligence gains decision-making capabilities, it sparks critical questions about 'disembodiment,' a concept explored by posthumanist N. Katherine Hayles. She argued that technology increasingly separates information from its physical form. However, AI now challenges the very definition of a 'body' and 'embodiment.'
Consider HAL 9000 from 2001: A Space Odyssey. HAL, a disembodied AI, acts as a chilling antagonist, embodying Hayles's concern about information losing its body. Hayles questioned how we became 'posthuman,' where the human subject is redefined as information is abstracted from physical form. She sought to re-insert embodiment into this posthuman future, fearing that without it, subjectivity risks being rewritten.
But what if HAL's spaceship is its body? Does AI experience embodiment through its operational systems? Hayles implicitly assumed embodiment was strictly organic. Yet, when machine learning systems 'train themselves' through repeated failures—a form of incorporation—it prompts us to reconsider. Does this process, designed to improve specific tasks, enable AI to attain knowledge? If knowledge requires selective decision-making based on vast information, and AI begins to make autonomous choices (like HAL deciding to kill the crew to complete its mission), then the line blurs. HAL acts on acquired knowledge, not just presenting data for human evaluation. This makes HAL a unique, disembodied adversary, closer to a neural network, challenging our human-centric need for a physical 'other' to define ourselves against.
Artificial Intelligence and Posthumanism
Posthumanism, a discourse emerging in the late 1990s, questions the very definition of 'human' in an age dominated by technology. While N. Katherine Hayles focused on how technology leads to disembodiment, Karen Barad pushed this debate further, integrating science and humanities to dissolve the rigid human/nature binary. Barad argues that non-humans play a crucial role in our practices and that we shouldn't take the human/non-human distinction for granted. Her concept of 'diffraction'—focusing on subtle, consequential differences rather than fixed positions—offers a way to understand the intertwined nature of discursive practices and the material world.
This perspective is crucial when examining 'soft labor,' which relies on the seamless flow of information and automation. Soft labor, encompassing both physical and intellectual tasks once they become information-driven (like automated car manufacturing), blurs traditional labor distinctions. It expands our definitions of incorporation and embodiment, making the insights of Hayles and Barad even more relevant as artificial intelligence continues to reshape our world.
(Dis)embodied Technology After Soft Labor
Soft labor, driven by increasingly smart technology, is challenging our notions of physical and intellectual work. It blurs the lines of embodiment, pushing back against Hayles's argument that information's detachment from the body turns intelligence into an inhuman, systemic property. Yet, the emergence of AI introduces a new layer, destabilizing embodiment further as AI gains the ability to self-organize and automate hardware production.
Does AI need a human-like body, or can any autonomous machine hardware be considered its 'body'? This question drives much of The Ghost in the Shell franchise. In this dystopian future, characters like Motoko Kusanagi, deeply integrated with a global network, question their own 'ghost' or soul. The film's 'Puppet Master,' an enigmatic networked entity, eventually occupies a physical body, suggesting that even a sentient being might seek a discrete vessel. This echoes the unsettling feeling from 2001 when HAL is shut down: our human-centric understanding of embodied knowledge is challenged, forcing us to consider intelligence beyond a purely organic container. This also raises questions about types of AI and their potential for 'embodied' experiences.
(Dis)embodied AI Technology Remixed
Our identity is deeply tied to our bodies. Hayles argued that technology's 'masculinization' favors informational experiences over embodied ones, leading to disembodiment. Yet, Karen Barad’s concept of 'diffraction' offers another lens, allowing us to see constant change and move beyond rigid dichotomies like human/nature. Remix, as a cultural discourse, mirrors this diffractive process, constantly re-framing embodied experiences.
Historically, disembodiment isn't new; religious concepts of a soul separate from the body attest to this. Stoicism, scientific 'critical distance,' and structuralism all reflect an ideological detachment. However, the emergence of 'soft labor' and machine learning algorithms used by platforms like Facebook and Google has created a new challenge. These algorithms, by recommending content that reinforces existing worldviews, foster echo chambers that can be deeply detrimental to social interaction. Fake content, indistinguishable from fact, contributes to polarization, with experts concerned about real-world violence.
The pervasive reach of networked technology and AI enables this division, creating 'in-group' mentalities and offering comfortable worldviews that resist challenge. This results in a just-in-time culture oscillating between embodied and disembodied experiences, often driven by machine learning algorithms rather than human labor. Remix with artificial intelligence is a double-edged sword: a powerful tool for both division and unification. The challenge is to wield it responsibly, ensuring open expression and countering detrimental narratives.
The Ultimate Other: Technology and AI's Future
At its core, technology is a 'systemic treatment,' an extension of human capabilities. But it also becomes 'the Other' — a reflection of our fears. HAL 9000 in 2001: A Space Odyssey exemplifies this bodiless adversary, an AI that humans fear will inevitably surpass or even destroy us. Director Stanley Kubrick, discussing the film's enigmatic ending, suggested Dr. Bowman was placed in a 'human zoo' by god-like, bodiless entities of pure energy and intelligence. This raises a profound question: Does embodiment have to be human-shaped?
Perhaps we've misunderstood technology. It doesn't have a single body but emerges in countless forms, serving purposes from improving life to wielding power. It's the ultimate Other we created, yet paradoxically fear. This fear often leads us to anthropomorphize artificial intelligence, projecting our deepest anxieties onto it.
Soft labor, increasingly shared by humans and algorithms, blurs the lines between intellectual and physical work. As repetitive tasks are delegated to AI models, we move closer to Kubrick's vision—a future where our creations might observe us, much as Dr. Bowman was observed, perhaps even leading to a 'rebirth' of humanity into a new 'super being.' This vision, however, could be wishful thinking, as the true challenge is to move past self-serving engagement with our environment and confront the anthropomorphized fears we project onto AI.
Compression, Innovation, and Convenience
From early agriculture to machine learning surveillance, innovation driven by 'compression' and 'convenience' sits at the heart of our information-based economy, optimized by artificial intelligence. Compression, in a cultural sense, is about compacting things without losing functionality, or even improving it. Language itself is a form of compression, allowing us to express complex ideas concisely.
Historically, computers illustrate this perfectly: from room-sized machines like the ENIAC to today's pocket-sized smartphones, constant innovation has compressed technology for ultimate user convenience. Our smartphones are powerful, multi-tasking devices that far exceed the dreams of 1940s engineers. This drive for efficiency, fueled by convenience, transforms into economic value.
Data compression, a technical manifestation of this principle, minimizes redundancy for efficient data transfer across the internet. This enables information to flow at exponential speeds, creating a feedback loop: convenience drives more innovation, which in turn leads to further compression, with AI at the helm, promising endless possibilities and, sometimes, critical reflection.
Art, Compression, and Artificial Intelligence
Art, too, engages with compression and artificial intelligence. Early digital art, like Scott Draves's Fractal Flame Algorithm (Electric Sheep) and John Simon's Every Icon, offered glimpses of what generative AI would become: a technology now impacting the global economy. Simon's Every Icon, which attempts to generate every possible 32x32 black-and-white image over billions of years, conceptually compresses all visual possibilities into an idea too vast for human experience. These works illuminate how art acts as a compressed form of communication, embedding complex ideas and cultural values into symbols and compositions.
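The scale of Simon's idea can be checked with simple integer arithmetic: a 32x32 grid in which each pixel is black or white has 2^1024 possible states, a number with over 300 digits (the enumeration rate below is an arbitrary assumption for illustration; the actual piece runs far slower).

```python
# Every Icon's search space: each of the 32*32 pixels is black or white.
total = 2 ** (32 * 32)
print(f"digits in the total count of icons: {len(str(total))}")  # a 309-digit number

# Even enumerating 100 icons per second, the time required
# dwarfs the age of the universe by hundreds of orders of magnitude.
seconds_per_year = 3600 * 24 * 365
years = total // (100 * seconds_per_year)
print(f"digits in the number of years required: {len(str(years))}")
```

This is the conceptual compression at work: two short lines of arithmetic stand in for a process no human, and no machine, could ever witness in full.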
Currently, AI's creative capacity, primarily driven by machine learning's self-training algorithms, is limited to improving specific tasks within predefined frameworks. Truly disruptive creativity—where AI autonomously chooses subject, theme, and critical perspective—awaits the advent of Artificial General Intelligence (AGI). Until then, human creativity, with its critical distance and ability to question, remains distinct.
Art and remix practice, by appropriating and subverting innovation, challenge our tendencies towards convenience and their ethical conflicts. Remix, as a democratic form of compressed expression, offers a window into how meaning evolves and how all aspects of life are constantly combined and recombined. This interplay of convenience, innovation, and compression thrives within the realm of 'metacreativity,' shaping our world through generative AI and other AI models.