Is Algorithmic Art Real Art? Exploring AI's Creative Potential

When the art collective Obvious put their piece, "Portrait of Edmond de Belamy," up for auction, it shook the art world. This blurred image of a man in a white-collar shirt and dark blazer, reminiscent of modernist paintings, might seem like a commentary on art history itself. Some critics even saw it as a nod to the "bad painting" era of the 1970s. But the real twist? A generative AI model created this fictional portrait. In October 2018, it fetched a staggering $432,000 at Christie's, marking the first time an artwork produced by an AI model sold for such a high price at auction. This immediately forced a fundamental question: Is it art?
This isn't a new debate in the art world; major creative shifts always spark similar discussions. Marcel Duchamp, for instance, challenged the definition of art with his “readymades” in the early 20th century. He highlighted the tension between manual craft and mechanical reproduction, not by focusing on a specific medium, but by examining the underlying principles of art production. His work, in many ways, foreshadowed the concepts crucial to what we now call metacreativity. As art curator Christiane Paul noted, Duchamp's shift from object to concept, and his use of appropriated "found" objects, paved the way for the "virtual object" and the manipulation of copied images so prevalent in digital art today. The core method of copying and appropriation, central to Duchamp's practice, is now a foundational element in machine learning image processing.
The Obvious collective, by presenting a digitally produced artwork created with appropriated code—a kind of digital readymade—forces us to re-examine the creative process itself, effectively displacing the traditional "hand of the artist." Their piece highlights the complexities of art's legitimacy: if a computer algorithm creates it, what's the validation process? And how did art-making evolve to a point where human labor can be streamlined for automation, culminating in the rise of metacreativity, as we now experience it?
Metacreativity, in essence, is a cultural phenomenon that emerges when the creative process extends beyond human production to include non-human systems. This definition naturally encompasses AI in general, and generative AI tools in particular, especially when a non-human entity "learns" to produce something that genuinely appears creative. More broadly, metacreativity points to the next stage of posthumanist production, where the lines between human and artificial creation continue to blur.
From Human Hand to Algorithm: A Historical Perspective
Modernism and the Dawn of Delegated Creation
The "Portrait of Edmond de Belamy" showcases a core principle of metacreativity: the selective delegation of creative labor. While we often think of an artist's original touch, the practice of delegating work to human assistants has a long history. Baroque artists like Peter Paul Rubens, and later neo-classicists like Jacques-Louis David, relied on apprentices to produce works that the master would then finalize and approve. Andy Warhol famously applied this model in his "Factory" and "Office" in the latter half of the 20th century, where assistants followed instructions—essentially, algorithms—to create pieces he would later finish or approve. The key difference with machine learning today is that these repetitive actions are performed by computers, not humans.
The seeds of algorithmic thinking in art can be traced back to modernism. The secular worldview emerging from the Enlightenment, focused on the individual as a scientific subject, influenced artistic production. Neo-classicism, with its strict rules for perspective and figure rendering, laid down analog algorithms that realists and impressionists later "hacked." A significant shift came with 19th-century photography, which redefined art by introducing mechanical reproduction, challenging the notion of the human hand's exclusivity in legitimizing art. Unlike early photography, which extended human production, modern generative AI models can learn and improve tasks autonomously, including art creation.
Artists in modernism often embraced scientific approaches. Georges Seurat and his contemporaries explored color theory based on scientific principles of visual perception. Surrealists applied Sigmund Freud's psychoanalytic theories to develop "automatism," producing art from dreams. This era shows a growing focus on the individual's interaction with cultural systems.
Concurrently, the concept of the computer was taking shape. Charles Babbage and Ada Lovelace (born Ada Byron, often considered the first computer programmer) conceived of the Difference Engine (1822) and Analytical Engine (1837), the latter including a conditional "if, then" logic—a cornerstone of modern programming. This early research led to the first programmable computers in the 1940s, like ENIAC in the U.S. and the Manchester Mark 1 in England. The increasing systematization and computational methods began to influence how creativity itself was viewed.
Artists like Marcel Duchamp moved beyond the idea of the artist as an original creator, repositioning them as an "interpreter of the world," a "human compiler" who selected and recontextualized industrially produced objects. Later, Nicolas Schöffer's CYSP sculptures (1950s) combined robotics and kinetic art, reacting to light, sound, and movement—early examples of cybernetic art. Vladimir Bonačić's G.F.E. (1969–1971) explored interactive principles using mathematical algorithms, turning art into an engagement rather than just an object of contemplation.
László Moholy-Nagy's "telephone paintings" offer a clear example of algorithmic art: in 1922 he dictated instructions for five porcelain enamel paintings over the phone. These instructions formed a precise algorithm, and his intent was to challenge the "individual touch" in art. John Cage pushed this idea further with works like "Imaginary Landscape No. 4" (1951) and 4'33", where unpredictability and audience participation became central. These pieces were early explorations in delegating creative control, anticipating how AI tools would later reshape artistic production.
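The telephone paintings can be read as instruction-as-data: the artwork is fully specified by a description transmissible over a phone line, and any faithful executor produces the same result. A minimal Python sketch of that idea (the shapes, coordinates, and color codes below are invented for illustration and are not Moholy-Nagy's actual specifications):

```python
# A "painting" specified entirely as transmissible data, in the spirit of
# Moholy-Nagy's telephone paintings. All values here are invented examples.
instructions = [
    {"shape": "cross",  "x": 3, "y": 5, "color": "EM-12"},
    {"shape": "circle", "x": 7, "y": 2, "color": "EM-04"},
]

def render(instructions, width=12, height=8):
    """Execute the instruction list on an ASCII 'canvas'. Any executor
    following the same data produces the same image."""
    symbols = {"cross": "+", "circle": "o"}
    grid = [["."] * width for _ in range(height)]
    for step in instructions:
        grid[step["y"]][step["x"]] = symbols[step["shape"]]
    return "\n".join("".join(row) for row in grid)

print(render(instructions))
```

The point of the sketch is that the artist's labor collapses into authoring the data; the rendering step is fully delegated.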
Postmodernism: Modularity and Early AI Visions
Postmodernism, from the mid-1970s to late 1980s, questioned linear history and progress, embracing pluralism and fragmentation—concepts deeply linked to modularity, which is foundational to computing. The computer itself, a multipurpose machine, reflects this modularity, offering customizable experiences with various software apps. This modular thinking, as scholars like Fernand Braudel and Gilles Deleuze observed, is embedded in capitalism and even in the design of machines.
Randomness, explored by Dadaists in the early 20th century, was updated by Fluxus artists in postmodernism. Nam June Paik's "Random Access" (1963) allowed viewers to randomly play segments of tape, showcasing a modular approach to sound art. Jean Tinguely's "Homage to New York" (1960), a self-destructing mechanical sculpture, metaphorically explored machines performing autonomous actions, anticipating today's concerns around agentic AI. These works emerged when cybernetics was a nascent field, leading to collaborations like Experiments in Art and Technology (E.A.T.), which brought together artists, engineers, and scientists.
Artists like Les Levine created "Contact, A Cybernetic Sculpture," exploring human perception enhanced by machines. Early computer art, such as Kenneth Knowlton and Leon Harmon's "Studies in Perception I," which converted images into pixelated approximations, was a precursor to the digital filters and generative AI tools we see today in platforms like Photoshop. Nicholas Negroponte's "Seek" (1969–70), an installation with gerbils and a robotic arm, explored intelligence and simple machine learning principles, blurring the line between conceptual art and cybernetics.
All these postmodern examples, with their disparate artistic approaches, highlighted modular complexity—the core of computing. This period saw the normalization of dynamics that would later become central to digital art, particularly in its conceptual forms, preparing the ground for the delegation of creative processes to machine learning.
Conceptual Art: Ideas as Algorithms
Conceptual artists used systematic methods for critical, reflective, and analytical art. Often framed as a shift from object to idea, particularly in the U.S., conceptual art aimed to dematerialize the art object. While sometimes seen as clinical, its legacy is now understood as a diverse global practice exploring art's functions within its institutional and worldly contexts.
Semiotics, a structural discipline, found its way into conceptual art. Martha Rosler's "Semiotics of the Kitchen" (1975) questioned gendered object use and labor through an agitated, algorithmic demonstration of kitchen tools. Lawrence Weiner's statements, like his 1970 "Arts Magazine" piece outlining three possibilities for an artwork, were algorithmic principles applied at a meta-level, critiquing art production through metalanguage.
Joseph Kosuth further solidified conceptualism's structural foundation, differentiating between "Theoretical Conceptual Art" (idea-focused) and "Stylistic Conceptual Art" (aesthetic results). Sol LeWitt's instruction drawings, executed by gallery staff, are classic examples of open-ended algorithms, like "On a black wall, pencil scribbles to maximum density." John Cage's influence on system-based art further demonstrated how artistic creation could stem from specific rules or chance, fundamentally algorithmic in nature. This period firmly shifted creativity from individual expression to a systemic evaluation of the world, informed by complex structures that constantly redefine cultural value and creative agency, laying the groundwork for how we understand AI's role in artistic creation.
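LeWitt's wall-drawing instructions behave like open-ended algorithms: the same rule, executed by different hands or machines, yields different but equally valid results. A minimal sketch of this property, assuming a random-mark reading of "scribbles to maximum density" (the grid size, mark characters, and density threshold are all illustrative, not LeWitt's):

```python
import random

def scribble_to_density(width=40, height=10, target=0.9, seed=None):
    """Keep adding random 'pencil' marks to a grid until the fraction of
    marked cells reaches the target density. Each run is a distinct but
    rule-conforming execution of the same instruction."""
    rng = random.Random(seed)
    grid = [[" "] * width for _ in range(height)]
    cells = width * height
    marked = 0
    while marked < target * cells:
        x, y = rng.randrange(width), rng.randrange(height)
        if grid[y][x] == " ":
            grid[y][x] = rng.choice("/\\")  # two pencil-stroke directions
            marked += 1
    return "\n".join("".join(row) for row in grid)

print(scribble_to_density(seed=1))
```

Running it with different seeds produces different drawings, each a legitimate realization of the single instruction — the algorithmic analogue of LeWitt's many wall-drawing installations.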
Digital Art: Bridging to AI-Powered Metacreativity
Digital art traces a direct lineage to conceptualism, which is often seen as its pivotal predecessor. Rachel Greene and Christiane Paul highlight this historical connection. For example, On Kawara's date paintings, which recorded daily activities, anticipated the value of data-mining and the commodification of the everyday, a principle now central to many generative AI models in art. Vito Acconci's process-based performances, like "Proximity Piece" (1970), which used an if/then/else conditional for audience interaction, were like pseudo-code for a computer program.
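The conditional structure attributed to "Proximity Piece" can be written out directly. A hedged sketch — the distance threshold and the named actions are invented for illustration; Acconci's score was a performance instruction, not code:

```python
def proximity_rule(visitor_has_moved_away: bool, distance_m: float) -> str:
    """An if/then/else rendering of a performance rule: stand uncomfortably
    close to a chosen visitor until they move away, then pick another.
    The 0.5 m threshold and action labels are illustrative assumptions."""
    if visitor_has_moved_away:
        return "select a new visitor"
    elif distance_m > 0.5:
        return "step closer"
    else:
        return "hold position"
```

Evaluating the rule step by step — `proximity_rule(False, 1.2)` yields "step closer", `proximity_rule(True, 2.0)` yields "select a new visitor" — shows how a performance score can be indistinguishable in structure from a program's control flow.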
Casey Reas, inspired by LeWitt, reconfigured LeWitt's instructions for artist-programmers in his "Software Structures" exhibit, leading to works where machines drew abstract lines of increasing complexity. These projects explicitly explored machines performing creative tasks. Early AI tools in art installations included Christa Sommerer and Laurent Mignonneau's "A-Volve," where visitors sketched creatures that evolved in a simulated environment based on "survival of the fittest" rules, demonstrating early AI-like biological simulations, though without true autonomous learning.
Ken Feingold's "If/Then" (2001) featured humanoid heads with seemingly random yet sometimes coherent conversations, based on algorithms referencing human interaction. Christiane Paul saw these as more than mere AI projects, noting their "metaphoric implications" for questions of existence. Ben Rubin and Mark Hansen's "Listening Post" (2001) utilized big data principles and machine learning algorithms to collect and analyze phrases from online platforms, presenting them on screens with robotic voices—highlighting the cultural tension between humans and AI.
Lynn Hershman Leeson's "CybeRoberta" (1970–98), a telerobotic doll, explored human-technology relationships and identity, with dolls programmed to "hack" each other. While not built with current AI technology, it prefigured the implications of machine and human interactivity crucial for machine learning development. David Rokeby's "The Giver of Names" (1990–present) is an interactive installation where a camera recognizes objects placed on a pedestal and generates sentences, continually updated and moving closer to an "Alien Intelligence"—a nod to the future of agentic AI and concepts explored by companies like OpenAI.
The Future of Creativity: Distinguishing Human and Artificial Intelligence
When we revisit the "Portrait of Edmond de Belamy," it's clear that art has undergone a profound transformation. The journey from the human hand to the algorithm shows a steady delegation of creative processes. Today's machine learning excels at focused tasks, improving through trial and error within predefined parameters. However, the vision of true "Artificial General Intelligence" (AGI) or agentic AI, capable of assigning itself open-ended creative goals, choosing subjects, themes, and critical perspectives, is still developing. This is the frontier where AI might truly eclipse, and perhaps even supersede, human creativity.
Art continues to play a vital role as a cultural mirror, reflecting and critiquing these advancements. It appropriates and subverts innovation, challenging the human tendency to prioritize convenience (often enabled by generative AI tools) at the expense of ethical and moral considerations, or even the planet itself. The rapid development of generative AI models by organizations like OpenAI continues to blur the lines between human and machine creativity, pushing us into uncharted territory where the definition of art, and even the artist, is constantly being redefined. It's an exciting, sometimes unsettling, future, where critical reflection and proactive engagement with technology are more important than ever.