From Factories to Features: AI's Impact on Labor, Art, and Memory

By Yumari
Insights & Opinion

Today, our economy is often described in terms of an "informational layer," a significant shift from manufacturing. Think of it: instead of working on machine parts, many people sit in front of computer screens, handling customer service calls for large corporations or providing other online support. This can happen in an office or a home, contributing to what’s widely known as the "gig economy."

This new landscape has given rise to unique classes of workers, reshaping our understanding of labor and wealth. McKenzie Wark outlines two key groups: the "hacker class" and the "vector class." The vector class controls the "vectors" of information abstraction – essentially, they manage the pathways and infrastructure where information moves, much like capitalists controlled the means of production, or pastoralists controlled land. The hacker class, on the other hand, possesses the capacity to create novel forms of objects, subjects, and relationships, often challenging existing property structures. Both of these classes are defined by their reliance on information as their primary product and source of validation.

This distinction became strikingly clear during the COVID-19 pandemic in 2020. In the United States and globally, those whose work primarily involved data and computers could often transition to remote work with minimal disruption. Meanwhile, workers in service industries or manual labor, who had to be physically present, faced furloughs, layoffs, or job loss. This period exposed how traditional "blue-collar" and "white-collar" distinctions no longer directly correlated with job security. The critical factor was whether a job could be performed remotely, highlighting how the informational layer has fundamentally reconfigured labor.
This disruption makes it clear that artificial intelligence and machine learning are redefining class structures in ways that political discourse, still clinging to industrial-era notions, often struggles to grasp. Essentially, global culture has evolved from a manufacturing economy, through a service-based one, to a hybridized information economy.

## Labor, Automation, and Creativity: A Journey Through Time

Considering this evolution, we can now look at the intertwined relationship between labor, automation, and creativity. The rise of computing itself plays a parallel role in shaping our current socio-economic reality. This journey helps us understand how pattern analysis became central to culture, eventually leading to "metacreativity"—the automation of the creative process itself.

While we often associate automation with 1960s factories, its roots go much deeper. Ancient Greeks used the Antikythera Mechanism for astronomical predictions as far back as the second century B.C. Modern automation, however, truly took shape with Charles Babbage's conceptualization of the computer, aided by Ada Lovelace, recognized as the first computer programmer. Babbage, inspired by the streamlining principles of the Jacquard silk-weaving loom, designed the Difference Engine (1819) and the Analytical Engine (1837) to automate repetitive mathematical processes for an increasingly industrial society. His interest in automating "mindless" repetitive labor lies at the heart of the current drive behind artificial intelligence innovation. In fact, this pursuit remains a key force in framing human creativity as our most valued asset.

A century before Babbage's Difference Engine, the first factories emerged in England in 1721, marking a century of increased systematic analysis of the world. Factories put complex theoretical principles into practical action.
Karl Marx, in his study of capitalism, viewed factories as abstract systems that dehumanized workers, reducing them to "numbers" in a complex production process.

By the late 1800s, the U.S. government faced challenges with its census. Herman Hollerith developed a punch-card system, drawing principles from the Jacquard loom and Babbage’s automation ideas. His success with the 1890 census led to the Computing-Tabulating-Recording Company, which later became IBM. IBM, along with Xerox, introduced computers to corporations in the 1960s. Later, Apple and Microsoft played crucial roles in the personal computer market in the late 1970s.

Hollerith's tabulation machines coincided with Sigmund Freud's systematic analysis of human behavior, leading to psychoanalysis. While later superseded by cognitive psychology, Freud's approach of treating humans as scientific subjects laid the groundwork for posthumanism, focusing on how humans are intertwined with nature. These examples demonstrate how the world began to be viewed in terms of pattern recognition – as complex systems best evaluated by analyzing their parts. Cultural, economic, and scientific fields adopted systematic approaches to measure and analyze diverse subjects more efficiently.

However, a simplistic, purely materialistic view of technology and labor can be misleading. Fernand Braudel warned against this, arguing that improvements like the horse-collar didn't simply "reduce man’s slavery." Similar reductionist thinking, Wark suggests, has displaced human slavery into the informational layer, where global systems extract surplus from producing classes, commodifying even time. This transactional, materialistic drive, fueled by artificial intelligence, reconfigures labor and allows the informational layer to flourish. The efficiency principles of machine learning technologies echo these earlier, sometimes problematic, theories.
This drastic push to automate basic tasks eventually started permeating the automation of creativity itself.

## Labor and Art: When Ideas Become Algorithms

The world of art has long grappled with the definition of "labor." Marcel Duchamp's "readymades" in the 1910s, which appeared effortlessly assembled, challenged the notion that intensive manual labor defined fine art. Yet, these carefully developed works became cornerstones of modern art. Duchamp influenced later artists, leading to the conceptual artists of the 1970s who focused on "dematerializing" the art object, emphasizing ideas, documentation, and instructions over physical forms. Artists like Sol LeWitt created work that functioned much like algorithms, adhering to a set of rules to produce the art.

Today, the art world generally accepts that artistic practice, even if labor-intensive, is driven by an idea or theme. The challenge arises with machine learning automation, which complicates our understanding of originality when creative processes are delegated to algorithms. Machine learning has made the automation of creative labor possible. This delegation of creative tasks is deeply linked to the concept of "remix," a cultural development amplified by the global adoption of the internet.

## Labor and Remix: The Art of Selective Creation

Remix culture often faces a twofold popular assumption: originality is unique to humans, and it stems from hands-on work. Critics like Henry Rollins famously dismissed electronic music for using sampling and computers, seeing it as lacking "real" skill. Even those who acknowledge the creativity of remix, like Kirby Ferguson in "Everything is a Remix," noted that with the internet, "you don’t even need skills" to remix and distribute content globally.

These statements highlight how labor has historically legitimized creative processes.
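The earlier comparison of LeWitt-style instruction-based art to algorithms can be made concrete with a short sketch. The rules below are invented for illustration (they are not LeWitt's actual instructions); the point is that the instructions themselves, not the hand executing them, define the work:

```python
import random

def execute_instructions(width, height, num_lines, seed):
    """Follow a fixed set of drawing rules on a character grid.

    Hypothetical rules for illustration: pick a row on the "wall",
    pick a start point in the left half and an end point in the
    right half, then draw the line. Any executor following these
    rules produces a drawing from the same family of works.
    """
    random.seed(seed)  # the only chance the rules permit
    grid = [[" "] * width for _ in range(height)]
    for _ in range(num_lines):
        row = random.randrange(height)             # rule 1: choose a row
        start = random.randrange(width // 2)       # rule 2: choose a start
        end = random.randrange(width // 2, width)  # rule 3: choose an end
        for col in range(start, end):              # rule 4: draw the line
            grid[row][col] = "-"
    return ["".join(r) for r in grid]

drawing = execute_instructions(width=40, height=8, num_lines=5, seed=1)
print("\n".join(drawing))
```

Changing the seed yields a different but rule-consistent drawing, much as different drafters realize the same instruction set differently on different walls.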
In the 1970s and 80s, music sampling, with its seemingly effortless execution, challenged traditional notions of labor and talent. It was often dismissed as an uncreative act, implying no need for mastering an instrument. Yet, sampling, despite the stigma, questioned originality by compressing labor into modular, executable actions that could be optimized through depersonalization – no human needed to play an instrument. Like musical notes, recorded sounds could be repurposed to create new compositions across all media. Sampling, as a foundational property of remix, enabled the dissemination of ideas and the creation of new forms by absorbing physical labor into intellectual labor, making labor an "executable abstraction."

Looking at the history of music, intense labor has always been linked to creativity and authorship. Since the late 1970s, DJing and music remixing challenged this directly. Melodies and harmonies, once reserved for live studio performances, were replaced by sampling in hip hop and electronic music, as the studio itself became an instrument. Repurposing existing recordings became foundational for remix across media, streamlining creativity to focus on ideas. A producer could treat pre-recorded melodies like basic musical elements for a composition. The catch, however, is that these elements are someone else's previous labor. This transforms samples into "meta-resources" – databases of sounds that, without understanding the complexity of selective creative processes, can appear to lack skill.

This blend of conceptual art and concrete music now finds its full manifestation with machine learning in creative practices across art, music, and literature. Generative AI models and Generative Adversarial Network (GAN) projects like "This Person Does Not Exist" create portraits of people who aren’t real, while works by AICAN produce paintings in the style of famous artists.
These generative AI examples implement machine learning algorithms to produce images that, without context, appear human-made.

## Labor and Artificial Intelligence: The Rise of Metacreativity

All layers of production – expansional, optimizational, modular, and informational – contribute to a symbiotic optimization, driven by the constant investment in delegating work. These layers optimize relatively simple actions that don't involve complex decision-making. None are left behind; they are interdependent, forming the infrastructure of our current era. "Metacreativity" arises from this surplus of innovation, emerging within the informational layer when decision-making is delegated and merged with specialized actions. Metacreativity pushes the normalization of labor delegation to unprecedented levels.

In digital art and media, Adobe’s integration of machine learning across its programs is a pervasive example. AI learns user habits to support the creative process. Gavin Miller, Head of Adobe Research, envisioned a future where smart technology helps humans focus on creativity by automating tasks like "selecting an object" with "no clicks." That future is arguably here. The question now is whether machines will eventually execute decisions on their own, moving into the privileged realm of creativity itself.

Humans have consistently delegated actions throughout history, whether for efficiency, profit, or survival. This constant implementation of technology for efficient labor reshapes our engagement with the world creatively, encapsulating it in aesthetics informed by automation and machine learning, leading to metacreativity.

Sabrina Raaf’s robot, Grower, an installation that exposes the tension of labor automation, reflects this delegation of creative labor to machines.
The term "robot" comes from the Czech "robota," meaning forced labor, with its Slavic root "rab" translating to "slave." This highlights the implications of repetitive labor. Historically, work was broken into manageable, repeatable parts, leading to the factory system. Factory workers often had limited choices, forced into monotonous work by economic pressures. Grower strips repetitive work down to a simple algorithm: based on CO2 levels in a room, it draws lines on a wall. The longer lines, representing more people, eventually resemble overgrown grass, a commentary on how human labor has reshaped nature through technology.

Grower critiques the relationship between labor and aesthetics through automation. By aestheticizing basic labor, Raaf echoes Braudel’s observation that progress requires balancing human labor with other power sources. When human labor becomes costly, replacing it becomes necessary. Artists, like researchers, constantly strive for productivity and quality. Grower is slow but relentlessly productive, guaranteeing a "creative product." Yet, Raaf, not the machine, is the artist. This dynamic may change as selective creative processes are increasingly delegated to machine learning algorithms.

Understanding labor in creative practices, especially art, is vital for fair evaluation. Labor is still heavily associated with physical activity, making artistic work often misunderstood as "unreal" labor. Art is a major international market, yet its value is sometimes considered superfluous by those focused on basic daily needs. The artist’s labor, often involving constant exposure to the world, is hard to measure. This raises the question: can artificial intelligence truly appropriate all the variables humans intuitively use for selective creative processes? Aesthetics remains one of artificial intelligence development's most complex challenges. The repetitiveness of a job is the foundation for current machine learning algorithms.
These self-learning algorithms are designed for efficient single actions, with the goal moving from specialized, narrow AI to Artificial General Intelligence (AGI)—closer to human intelligence.

Metacreativity, then, is the delegation of creative labor to self-training algorithmic machines. While machine learning frees humans from repetitive tasks (like playing musical riffs), allowing them to focus on broader creative issues (like film directors or composers), it also forces humanity to question its identity as labor becomes aestheticized. Just as humans have historically sought to control nature, machine learning is the latest manifestation of this quest to control all conceivable things. Humans optimize materiality into modular, remixable pieces. Labor, paradoxically, is subjected to itself – a set of executable actions. Remix, when implemented with machine learning as part of the metacreativity paradigm, further exposes the tension between aesthetics and labor, repositioning selective processes, once exclusively human, as non-human algorithms performed by computers.

## Modularity: The Building Blocks of Our Digital Reality

In 2019, an interesting website, "This Person Does Not Exist," showcased the power of generative AI models. Upon visiting, you'd see a close-up portrait of a person, then a small textbox revealing it was "Imagined by a GAN (generative adversarial network)." Each click presented a new, unique face, but none of these people were real. They were unique photographic composites generated by generative adversarial networks from thousands of analyzed images, acting as a vivid generative AI example.

This project perfectly illustrates "modularity," the principle of combining discrete parts to build complex units. The GAN takes thousands of photos as individual modules, analyzing them to find common patterns. Through trial and error, it composites believable portraits of non-existent people.
It’s a streamlined process that creates apparent diversity from the same underlying code – a facade.

## Modularity Through History: From Muskets to Algorithms

Modularity underpins all areas of computing, profoundly impacting information-sharing online. Platforms like Google, Facebook, and Twitter rely on modular principles and machine learning algorithms to select content based on user interactions. Modularity is foundational for the informational layer, supports new markets in the expansional layer, and streamlines efficiency in the optimizational layer. Essentially, modularity involves executable actions that can be reconfigured into processes, leading to the production of swappable material and immaterial objects. It allows us to analyze the world as interconnected functional units forming complex systems.

Historically, modularity has been present across cultures but became systematically implemented during modernism, evident in the Gutenberg press’s configurable type. The United States, in particular, saw modular thinking flourish. For example, American education allows students to combine specialized, discrete classes into a chosen field, unlike Europe’s historically more rigid, holistic approach. In sports, American football’s game time is broken into short, discrete plays with frequent stoppages, contrasting with soccer’s continuous 45-minute halves. Even weapon development in the U.S. optimized muskets as units made of swappable parts. This focus on modularity in technology predates the investment in computing research during WWII, which led to the ENIAC, built to perform ballistics calculations. This ability for modular conceptualization and practical production prepared the U.S. for industrialization, where factories broke repeatable actions into discrete units.
This led to humans being replaced by machines, a trend that expanded from manufacturing to administrative work, pushing people to new forms of labor through "disruptive technology."

Modularity’s strength lies in its dual function: ideological and material. Its conceptualization materializes in objects that, in turn, reshape modular thinking, optimizing innovation. Precision, in the form of indexicality (the ability to capture, organize, and measure), enables this exchange between ideology and materiality, allowing for discrete measurement of both the natural and cultural world. Photography, especially, became indispensable for modularity in the 20th century and has been enhanced by machine learning, as seen in projects like "This Person Does Not Exist."

## Modularity and Indexicality: When AI Hallucinates Reality

Indexicality, the ability to measure with precision, is crucial for verifying findings and producing efficiently. Photography, since the advent of mechanical reproduction, has been an adaptable technology supporting indexicality, acting as an indicator, sign, or measure.

Photography is vital for generative adversarial networks, particularly in visual culture. "This Person Does Not Exist" leverages photography’s indexicality to analyze vast numbers of images. However, this project demonstrates how GAN-produced photographs are not records of real things. Instead, they are digital composites derived from an analysis of photographic records of what does exist in the actual world. This creates a "meta-state" where the default assumption of a photograph being indexical can no longer be made. Viewing an image from "This Person Does Not Exist" doesn't allow us to assume a direct indexical relationship to a past event, even if manipulated. Instead, GAN imagery displaces this connection to reality into an abstraction of space and time. This challenges what we traditionally understand as proof or evidence.
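The idea of a composite with no single real referent can be caricatured in a few lines of Python. This is emphatically not how a GAN works internally (GANs learn statistical patterns through adversarial training rather than copying pixels); the sketch only illustrates the modular premise that a "new" image can be assembled entirely from indexical records while being a record of nothing:

```python
import random

def composite_portrait(photo_archive, seed):
    """Assemble a new 'image' by sampling each pixel position from a
    randomly chosen source photo (photos here are flat lists of
    grayscale values, a stand-in for real image data).

    The result derives entirely from real records, yet typically
    matches none of them: it has no single real-world referent.
    """
    random.seed(seed)
    width = len(photo_archive[0])
    new_image = []
    for i in range(width):
        donor = random.choice(photo_archive)  # pick a module per pixel
        new_image.append(donor[i])
    return new_image

# Tiny invented "archive" of three 4-pixel photographs.
archive = [[10, 20, 30, 40], [12, 22, 28, 44], [9, 19, 33, 41]]
fake = composite_portrait(archive, seed=3)
print(fake)
```

Every value in the output is traceable to some source photo, but the composite as a whole indexes nothing that ever existed, which is the "meta-state" described above.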
This detachment from the actual world, as if caught in a "modular echo chamber," appears to feed into an apathy towards real-world consequences, like those of climate change. We face the danger of a homogeneous, displaced representation that no longer needs to rely on the physical world, but on meta-references of it. This is a form of AI hallucination in visual terms, where what’s generated looks real but has no direct real-world referent.

## Modularity, Culture, and Nature: The Unseen Connections

Modularity deeply influences the relationship between culture and nature. In the expansional layer, humans define territories and domesticate plants and animals, fostering innovation. Modularity aids this process by compartmentalizing nature, treating it as manipulable parts without fully grasping the interconnectedness – a failure reflected in climate change. Nature, unlike human-made objects, cannot be treated solely as a structure of swappable parts.

The attitude of the modular layer became fully transparent with online communication: information and data were treated as reconfigurable pieces, shared across networks, and data-mined for media optimization. This tendency culminates in the concept of the "metaverse." Facebook’s rebranding to Meta signifies a strategy to develop a virtual reality platform that mirrors the real world, but is fully modular, allowing designers to swap all parts. These companies aim to reintroduce embodiment into a networked experience often seen as disembodied by some posthumanist theorists. Attempting to create a fully modular "natural" space might be even more detrimental for humanity, especially as our real environment faces escalating crises.

Echo chambers, a profound challenge of networked technology, hinder civic engagement and empathy. Art, as a reflective space, can offer ways to bridge these ideological divides, fostering understanding of difference and multiplicity.
Modularity has both shaped and been shaped by art and design throughout history.

## Modularity, Art, and Design: Algorithms in Action

Art thrives on modularity. Mechanical reproduction revealed selectivity as a key driver in creativity, a process increasingly delegated to machine learning algorithms. Modularity aligned with artistic strategies like "chance," where interchangeable and reconfigurable elements acted as material and immaterial modules. Dadaists, surrealists, and futurists explored chance in the early 20th century, followed by multidisciplinary artists like John Cage, who used silence to highlight nature in music, and Nam June Paik, who worked with experimental video. Yoko Ono’s Fluxus performances explored human fragility. Conceptual artists, aiming for critical distance and minimal subjectivity, put algorithmic principles into practice, drawing inspiration from Marcel Duchamp's repurposing of objects as modular "readymades."

Modularity principles also inform contemporary digital art. Artists like Casey Reas, co-creator of the Processing programming environment, curated "Software Structures," an exhibition inspired by Sol LeWitt’s drawing instructions. This project established a clear connection between conceptual art and software, as artists developed algorithms that produced diverse visual works from the same basic code. Owen Mundy's "I Know Where Your Cat Lives" exemplifies modularity in exposing network privacy, visualizing cat and owner locations from public photo-sharing APIs.

However, the "critical distance" sought by conceptual artists is never fully achieved with AI models. Designing an algorithm is always a selective process, defining how information is gathered, interpreted, and presented.
This process is inherently shaped by the designer's inclinations, involving principles of appropriation and selectivity, similar to remix culture.

## Modularity After Remix: The Blended Realities of AI

Remix, as a cultural activity, makes the recombination of elements transparent. It’s closely tied to mechanical reproduction and the understanding that human-built materials and cultural actions can be treated as swappable pieces. Our current concept of remix emerged once modularity became an ideological paradigm, validating the attitude of treating nature and all human creations as large structures of manipulable parts, often without fully understanding their interconnectedness.

This human-centered attitude manifested in modern culture in correlation with the emergence of recording as a common practice. The treatment of material objects made of modular parts was explored creatively in art in terms of collage. In this case, pieces of pre-existing material, such as photographs, reproductions of artworks, and drawings, were treated as modules that could be recombined to create new compositions. Collages by German Dadaists such as George Grosz, Raoul Hausmann, John Heartfield, and Hannah Höch, followed by acts of appropriation by the Neo-Dada artists Jasper Johns and Robert Rauschenberg, are examples of creative approaches that consisted of repurposing and recontextualizing found objects. A similar attitude was at play in sound recording. Musicians experimented with tape recordings: Karlheinz Stockhausen, John Cage, and many others associated with musique concrète and Krautrock approached sound as source material to be recorded and reconfigured as parts through cutting and pasting.

Sampling as a creative action in music recording encapsulates this process, which eventually became foundational for daily activities in digital culture through the common practice of cut/copy and paste.
Sampling in effect turns the fragment into a fully configurable piece that can be repurposed in any conceivable form. With digital technology, sound data can be turned into image data and vice versa, though data translated this way typically appears as noise. Artists such as Cory Arcangel have translated data from one form to the other to develop experimental work, and music performers such as Scanner take electronic sounds from unexpected sources, such as cell phones, to create electronic music compositions. Their approach, while not implementing artificial intelligence, opens the way for machine learning to develop non-human works of visual art and sound compositions.

Sampling, much like generative AI today, challenges assumptions about creativity as an exclusively human-centered activity. While some see sampling as "lazy," it actually exposed a nuanced process of intertextuality and selective emulation. Jazz and Rock & Roll, built on musicians taking riffs from each other, prefigure the automation of this emulation. Hip hop streamlined this with music samplers in the 1980s, leading to the machine learning algorithms increasingly found in contemporary music production. Software like Ableton Live integrates sampling and live performance, optimizing pre- and post-production processes to meet shifting demands.

Close emulation, while foundational for cultural citation, is now mashed with material sampling in "deepfakes" in movies. Princess Leia in Rogue One: A Star Wars Story (2016) and the planned appearance of James Dean in Finding Jack are generative AI examples extending actors' careers posthumously. Artificial intelligence is also used for AI voiceovers, such as Anthony Bourdain’s voice in the documentary Roadrunner and Andy Warhol’s narration in The Andy Warhol Diaries. All these examples thrive on remix principles implemented with artificial intelligence, relying on reconfigurable steps, adaptability, and modularity.
This is the core principle behind systems like ChatGPT, where existing data is reconfigured to generate new content.

## Modularity as Binder: The Ethical Crossroads of AI

The modular layer optimizes information exchange, and modularity, as a cultural variable, binds ideas and concepts with their material manifestations, constantly reshaping our thinking in a loop of ongoing transformation. "This Person Does Not Exist" presents a representative abstraction, exposing the danger of a homogeneous state of displaced representation that relies on meta-references rather than the real world. This was evident when the GAN, originally designed to resemble real people, produced images that moved further away from actual forms. This can be seen as a form of AI hallucination, creating convincing yet entirely fabricated realities.

The repercussions of this trend manifest as an ideology where the real world is increasingly shaped by what machine learning creates. One example is the growing obsession with altering physical appearance to match Instagram filters. This emerging body dysmorphia means people no longer desire to look like idealized celebrities, but rather like algorithmically smoothed and enhanced versions of themselves, often conforming to a template of whiteness. This implicit bias is also evident in "This Person Does Not Exist." Repeated visits to the site consistently yield images of white people, highlighting the inherent AI bias against people of color in the underlying datasets or algorithms. This has real-world implications, as documented cases of AI bias in U.S. court algorithms disproportionately affect African Americans and other non-white minority groups in sentencing and risk assessments.

Another prime example is Facebook’s Meta. Mark Zuckerberg’s billions of dollars invested in this virtual living space signal a future that will increasingly depart from the physical world.
Discussions with Lex Fridman illustrate this: they joked about their conversation being replayed by avatars in the future, with wardrobes modularly changed daily and a meta-economy emerging. This suggests potential sub-layers of production built upon our physical world’s four layers.

These generative AI examples and the broader role of modularity in artificial intelligence reveal a struggle: balancing increasingly diverse cultures with homogeneous templates, while simultaneously becoming more detached from the physical world we inhabit. Modularity paradoxically pushes societies toward nebulous idealized forms by constantly reshaping reality through technology, leading to ongoing meta-forms of engagement that redefine embodied and disembodied experience. This constant redefinition, driven by increasingly powerful generative AI models and ChatGPT-like systems, will challenge our understanding of what it means to be human and creative.
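The reconfiguration principle at work in these systems can be illustrated with a toy Markov-chain text remixer. This is a deliberately minimal sketch (modern models such as ChatGPT learn statistical representations at vastly larger scale, and the sample sentence here is invented); it only shows how "new" sequences can be generated purely by recombining transitions already present in existing material:

```python
import random
from collections import defaultdict

def build_model(words):
    """Map each word to the words observed to follow it in the source."""
    model = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def remix(words, length, seed):
    """Generate 'new' text by reconfiguring transitions from the source."""
    random.seed(seed)
    model = build_model(words)
    word = random.choice(words)
    output = [word]
    for _ in range(length - 1):
        followers = model.get(word)
        # Dead end (a word never followed by anything): restart at random.
        word = random.choice(followers) if followers else random.choice(words)
        output.append(word)
    return " ".join(output)

# Invented source text for illustration only.
source = "labor becomes information and information becomes labor again".split()
print(remix(source, length=12, seed=7))
```

Every word the remixer emits comes from the source; only the sequence is new, which is the modular premise of remix restated as code.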
