How AI Fuels Conflict in Our Digital Battleground

The internet, once envisioned as a free, anarchic network connecting people globally, has morphed into a significant theater of state competition. It's become militarized, a domain where nations project power, achieve strategic goals, and even wage war. In this evolving digital landscape, artificial intelligence (AI) is rapidly developing as a general-purpose technology, reshaping nearly every aspect of our lives, including international security. The growth in computing power, vast data production, and advanced algorithms mean algorithmic decision-making is now commonplace in defense and security. We're seeing AI assist in strategic choices, manage critical infrastructure, boost cyber defense capabilities, and even track nuclear deterrence assets in real-time.
This new era, where cyberspace has emerged as a battleground, has intensified international security challenges. AI is set to accelerate this confrontation in and through cyberspace, adding another layer of unpredictability to an already complex and ambiguous domain. Clear signs of a technological arms race for cyber supremacy are evident: the global decoupling of the ICT supply chain, the fragmentation of the internet, and massive AI investments in countries like the United States, China, and Russia. AI, in this context, acts like a powerful force built upon the foundation of cyberspace, inherently shaped by its rules and principles while simultaneously helping to define them.
Just as political efforts to reduce escalation risks in cyberspace have struggled to reverse the global race for cyber superiority, it's highly unlikely that the international community will establish common principles for regulating algorithmic warfare. For liberal democracies, this presents a dual challenge: maintaining their historical technological edge and international influence, while simultaneously upholding the human-centric values and principles that differentiate them from autocratic regimes.
Cyberspace: A Domain of Ambiguity and Asymmetry
Cyberspace has become too integral to daily life not to be a focal point where national interests inevitably clash. Cyber power – the ability to leverage cyberspace to gain advantages and influence events across various operational environments – is clearly a new dimension of 21st-century sovereignty. The dynamic itself isn't new: as data privacy, accessibility, and integrity become more crucial for national security, states are compelled to boost cybersecurity, making offensive cyber actions increasingly appealing. This mirrors historical developments in terrestrial, naval, and aerial warfare, and even in outer space.
Cyberspace is often called the "domain of ambiguity" because high-level threats operate alongside low-level skirmishes and criminal activities, often sharing similar technical characteristics. It's tough to fully grasp the motivations and scope of a cyber campaign without understanding its broader strategic, political, and operational context. The difficulty in attributing cyberattacks, combined with the widespread use of false-flag operations, makes it hard to discern "what is really going on" in the digital realm. While national intelligence agencies are better equipped to handle sensitive information and understand the complexities "behind the curtains," this also means the general public rarely gets an in-depth view.
Furthermore, while a cyberattack can be meticulously planned over months, the defender must respond to it in real time. Yet the true intent of a cyber campaign often becomes clear only once its objectives are met. Computer network operations (CNOs) are also inherently asymmetric: attacking is easier and far less expensive than defending, and the potential benefits of an attack far outweigh the risks of retaliation. Moreover, the short lifespan of "zero-day" vulnerabilities creates dangerous "use it or lose it" dilemmas for policymakers during crises. All this makes cyberspace the preferred domain for destabilizing campaigns and hostile activities that would be unsustainable in conventional warfare.
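The "use it or lose it" logic can be made concrete with a toy expected-value model (all numbers here are hypothetical illustrations, not empirical estimates): a stockpiled zero-day loses expected value as the chance of the vulnerability being independently discovered and patched compounds over time.

```python
# Toy model of the "use it or lose it" dilemma for a zero-day exploit.
# All parameters are hypothetical illustrations, not empirical estimates.

def expected_value(payoff: float, daily_patch_prob: float, days_held: int) -> float:
    """Expected payoff of using the exploit after holding it for `days_held`
    days, assuming it becomes worthless once the vulnerability is patched."""
    survival = (1 - daily_patch_prob) ** days_held  # chance it is still unpatched
    return payoff * survival

# A zero-day worth 100 (arbitrary units) with an assumed 2% daily patch chance:
for days in (0, 30, 90, 365):
    print(days, round(expected_value(100, 0.02, days), 1))
```

Even a modest daily patch probability halves the exploit's expected value within about a month, which is the arithmetic behind the pressure to strike early rather than hold capabilities in reserve.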
Every nation regularly collects intelligence, including through CNOs and signals intelligence (SIGINT) for cyber defense. These activities aren't forbidden by international law, and states have a legitimate interest in strengthening every aspect of their sovereign power. In a security environment where the "homeland is no longer a sanctuary," cyber power is essential for national security. In this sense, cyberspace is merely a new arena for ongoing international confrontation, but with crucial differences: geographical borders are irrelevant, actors are often unknown, civilian assets are frequent targets, and rules of behavior are hard to identify, establish, and enforce.
For over two decades, the international community has tried to identify agreed-upon rules for state behavior in cyberspace, with limited success. A cyber armaments control regime seems unlikely given low trust and impossible verification. While some progress has been made within organizations like the OSCE through voluntary confidence-building measures, these don't constitute binding norms. Establishing clear norms and deterring malicious cyber campaigns is hard enough among states, but it becomes even more challenging against non-state actors. In today's security landscape, non-state groups can significantly destabilize the traditional international order, especially in cyberspace, where "David" can often defeat "Goliath."
Non-state actors exploit the relative impunity, low barriers to entry, and widespread vulnerabilities in ICT networks. The use of cyber weapons by terrorists, for instance, is a worrying possibility, especially considering how easily knowledge and ready-to-use weapons can be acquired on the dark web. Transnational cybercrime organizations are also major players, heavily investing in R&D for new offensive capabilities, thus contributing to international cyber arms proliferation. These syndicates are difficult to dismantle due to their economic power and global reach. Police and judicial cooperation is complicated by attribution challenges and criminals' willingness to act as proxies for states seeking plausible deniability. In this ambiguous environment, hackers, hacktivists, companies, and individuals all contribute to the volatility of security in cyberspace, pursuing diverse military, political, and financial interests.
Tech Supremacy and the AI-Driven Arms Race
Amid this uncertainty, major powers are actively pursuing "cyber superiority" – a degree of dominance enabling secure, reliable operations without prohibitive interference. This means seamlessly maneuvering between offense and defense across the interconnected battlespace, globally, continuously shaping the environment to gain advantage while denying it to adversaries. Cyber superiority also helps map future conflict theaters, anticipate vulnerabilities, contest enemy actions, and establish deterrence – a particularly complex task in cyberspace where attribution and retaliation are difficult. It's key for enhancing situational awareness and attribution, allowing attacked countries to impose swift, costly, and transparent consequences. Of course, this necessitates continuous technological innovation across all aspects of military development.
If the new U.S. (and, ideally, Western) posture of persistent engagement succeeds in enhancing predictability, the international community might eventually agree on constraining rules and deepen cooperation against non-state actors. However, security in and around cyberspace will likely remain volatile due to conflicting national interests and ideological differences. The ongoing confrontation between the West and powers like Russia, North Korea, China, and Iran profoundly affects global stability. We live in an age of latent cyber conflict, producing a classic security paradox: each player's quest for greater security ultimately creates a more unpredictable environment for all.
A key feature of cyberspace is the "hand-in-hand" development of offensive and defensive capabilities. Effective defense of national ICT networks requires understanding attack methods and achieving some degree of cyber superiority, at least within one's own networks. Since cyber incidents demand immediate responses, mapping the battlefield before conflict erupts is crucial. This involves conducting intelligence, surveillance, and reconnaissance (ISR) operations against potential enemies' networks – operations easily perceived as military. Because cyber arsenals are secret (relying on zero-day vulnerabilities), states use clandestine, deniable actions to signal offensive capabilities, aiming to deter enemies by demonstrating readiness to respond "in kind." Malware found in critical infrastructure worldwide, for instance, can be seen as a weapon planted to signal readiness to strike.
Cyber weapons risk obsolescence once vulnerabilities are patched, creating "use it or lose it" dilemmas. They can also be reverse-engineered, fueling proliferation among state and non-state actors. All of this steadily exacerbates the security paradox. The most immediate risk is that complex cyber interdependencies multiply opportunities for cross-domain escalation. Difficulty in assessing relative strengths, differing threat perceptions, and divergent escalation ladders could all fuel unintended escalation. A cyberattack could degrade an adversary's situational awareness, blinding early-warning systems and command-and-control capabilities, potentially threatening nuclear strategic stability.
This is due to the increased interconnectedness of nuclear and non-nuclear systems. An attack on ISR and nuclear early-warning systems could heighten the risk of nuclear overreaction by complicating threat assessment and impairing retaliatory capabilities. This could favor "hair-trigger" readiness and lower-level decision-making, making a disarming cyber strike a viable option. At the same time, to strengthen deterrence, many states consider retaliating in different domains, making cross-domain escalation an intended outcome.
This need to maintain cyber superiority drives the global decoupling of ICT supply chains, barrier-building against technology transfer, and national safeguards against foreign acquisition of tech products. This isn't just a Western issue; China, for example, is replacing all public body hardware and software with domestically produced technology. This decoupling responds to insidious attacks on hardware and software integrity, and the threat of espionage targeting confidential information, trade secrets, or public opinion manipulation. It also prevents dependence on foreign providers who might use control over tech to exert political or military pressure. Even the global internet is segmenting, with China's "Great Firewall" and Russia's ability to segregate its networks. These developments intensify the very competition that created them.
The Algorithmic Warfare Future
Cyber-enabled information warfare is already a key instrument in international confrontation. Hostile actors leverage CNOs and "computational propaganda" to influence public opinion far beyond the reach of traditional PsyOps. Automated accounts (bots), increasingly powered by generative AI, populate social media, interact with real users, and employ social engineering at scale, while supporting CNOs reroute data, launch DDoS attacks, compromise supply chains, and infiltrate networks to steal, modify, or expose information. The goal is to manipulate content, deceive, distract, and disinform, muddying the waters with competing narratives to disorient the public or shape specific target audiences. The deepest fault lines in Great Power Competition may emerge around the idea of a free, uncensored, global internet: for some, it's essential to fundamental rights; for others, an existential threat to political stability.
These trends fuel mutual distrust, hindering international cooperation on global challenges like climate change, terrorism, and health security. The net effect is a reinforcement of the idea that strong sovereign states are the safest path. This is paradoxical: the internet was born to connect people across borders, with states sometimes seen as "special guests." Instead, it's become a destabilizing arena, contributing to the crisis of multilateralism. Liberal democracies face the added burden of preserving their technological edge for deterrence while upholding internet freedom and human-centric values.
We're shifting from a nuclear security paradigm in which the only winning move was not to play, to one where "persistent engagement" is necessary to avoid losing in cyberspace. While enhancing predictability is a shared goal, persistent operational engagement in this "domain of ambiguity" may be the only way to bolster deterrence. However, the militarization of cyberspace, the blurred lines between intelligence and military campaigns, widespread cyber-enabled information warfare, the secrecy of cyber arsenals, and the massive security paradox created by national quests for cyber superiority all erode trust and threaten international stability, increasing the risk of misinterpretation, miscalculation, and unintended escalation.
Technological superiority defines today's Great Power Competition. Cyberspace, though 40 years old, is more than technology; it's a man-made domain, a political space, a ubiquitous nervous system connecting all dimensions of life. It's an environment of "unthinkable complexity" with countless players. The advent of cyberspace was a game-changer whose implications we only partly grasp. While cyberspace is more than technology, innovation is key to superiority, and AI promises a leap in cyber capabilities. As researchers Mariarosaria Taddeo and Luciano Floridi noted, "cyberspace is a domain of warfare, and AI is a new defense capability."
AI is arguably one of this century's most critical global issues. Major investments in countries like the U.S., China, and Russia signal an ongoing arms race for AI dominance, as Putin famously declared: "Whoever becomes the leader in this sphere will become the ruler of the world." Exponential growth in computer-processing power has enabled machine learning techniques to perpetually train on the "big data" generated by new sensors and the Internet of Things. Cloud technology allows quick access to off-board processing and data, making autonomous systems pervasive and capable. Our increasing reliance on AI-enabled systems opens new attack vectors for stealing information, disrupting decision-making, inciting fear, or manipulating public debate.
China's AI progress, in particular, worries the West, signaling Beijing's rising power and military deterrent. For China, this tech race is a tool to disrupt the "unipolar" liberal democratic order. Autocratic regimes are challenging Western tech superiority by leveraging private sector control and long-term planning. China aims to be the "premier global AI innovation center" by 2030, complete military modernization by 2035, and become a "world-class" military by 2049. China's leadership views AI leadership as critical for future military and economic power, seeking to reduce dependence on foreign technology.
AI will introduce a new generation of threats, transforming conflict into "algorithmic warfare" (the U.S. term) or "intelligentized" warfare (the Chinese one). Predicting AI's impact on international order and the balance of power is hard, given the many variables and the unexpected emergent behaviors of combined technologies. Long-term power shifts from AI might be indirect, flowing through economic applications. If AI progress is commercially driven, advances may spread rapidly to militaries, reducing asymmetry; if it is driven instead by defense research, early adopters gain a durable advantage. Conversely, commercial interests might hold back military development by siphoning off human talent with higher salaries.
The impact of open AI initiatives and of the private sector on military power remains uncertain. Could collaborative engagement, enhanced by swarm technologies and agentic AI, empower adversary groups to inflict massive damage? We simply cannot foresee the surprises that could trigger unintended escalations. Wealthier economies may invest more in research, gaining an initial technological edge and attracting human capital from poorer countries. Automation, however, reduces the importance of population size for military power, since robotics and swarm technologies depend more on financial resources. AI-enabled automation and swarm technologies could also reverse the trend toward ever more expensive, tech-intensive weapons, enabling new tactics built on cheap drones and autonomous systems. The same technologies could give non-state actors powerful, accessible tools for mass disruption, increasing unpredictability.
AI could soon give an overwhelming military advantage to whoever fields it first ("technological surprise"), making preemptive strategies rational but destabilizing. Fear of surprise incentivizes fielding new military capabilities without proper testing, already the norm in cyberspace, where time to market is prioritized. As reliance on AI and AI-enabled automation in weapons grows, so does the risk of employing lethal autonomous weapons without proper testing or due regard for international humanitarian law. Conversely, systems meticulously engineered for self-protection could default to aggressive self-defense, producing escalating brinkmanship. Finally, asymmetric military advantage might come not from mastering a technology first, but from being the only side willing to weaponize and employ it first, owing to differing moral compasses.
Overall, AI developments will likely make the balance of power harder to assess and maintain. Ambiguous threat perceptions heighten risks of disastrous actions. While the speed and automation of cyber warfare with AI are hard to predict, AI-enabled offensive capabilities are already used for reconnaissance, network penetration, profiling, and targeted advertising. AI can play a role in all 19 use cases for cyberattacks, from attack planning to self-learning malware.
AI will enhance computer system robustness, improve situational awareness and anomaly detection, and enable autonomous defense, potentially even self-directed retaliatory strikes. Cyber defense strategies will increasingly leverage AI, elevating cyber dueling into algorithmic warfare. We might see autonomous malware replicating and adapting to inflict maximum damage. Algorithmic warfare will speed up battles, as machines analyze vast, fast data flows from multiple sources (audio, video, text) to find patterns invisible to humans. Automation will fundamentally shift future warfare. The concept of "hyperwar" describes this accelerated operational tempo, where AI and machine cognition collapse decision-action cycles to fractions of a second. Decision-makers will face existential choices at this hyper-tempo, and "use it or lose it" dilemmas will favor offense. Automation and hyperwar multiply ambiguity, compressing time for situational awareness.
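As a deliberately minimal illustration of the anomaly-detection idea behind AI-assisted cyber defense (a simple statistical baseline on made-up traffic figures, not an operational system), a defender can learn what "normal" looks like and flag sharp deviations:

```python
import statistics

def fit_baseline(samples):
    """Learn a normal-traffic baseline: mean and standard deviation
    of an observed metric such as requests per minute."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, mean, stdev, threshold=3.0):
    """Flag observations more than `threshold` standard deviations
    from the learned baseline."""
    return abs(value - mean) > threshold * stdev

# Hypothetical requests-per-minute observed during normal operations:
normal_traffic = [102, 98, 110, 95, 105, 99, 101, 104, 97, 103]
mean, stdev = fit_baseline(normal_traffic)

print(is_anomalous(100, mean, stdev))   # ordinary load
print(is_anomalous(1500, mean, stdev))  # possible DDoS-scale spike
```

Production systems replace this z-score rule with learned models over many features, but the defensive logic is the same: characterize the baseline, then act on deviations faster than a human analyst could.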
In the military, machines are increasingly tasked with automatically responding to high-speed threats. AI will embed deep machine learning systems, powered by neural networks and trained on big datasets, into every battle network grid. These will speed up grid operations against cyber, electronic warfare, and space attacks, allowing faster coordination between manned and unmanned systems and seamless multi-domain operations. We are not only unprepared for such wars; we also struggle with their strategic, operational, and moral implications. AI can assist decision-making by processing huge volumes of data and reducing ambiguity, but we lack clarity on how to ensure the reliability of AI-assisted decisions: could opponents corrupt training data to hack the outcomes? Nor do we know whether accelerated warfare will force humans out of decision-making. If autonomy becomes critical for military superiority, an international race to take humans "out of the loop" might disregard moral values. It also raises the question of whether there will be time for human common sense in critical situations, as in 1983, when Soviet Lieutenant Colonel Stanislav Petrov judged an early-warning alert of incoming missiles to be a false alarm and declined to trigger nuclear retaliation. Human-machine interaction is needed to supervise AI decision-making that may be "cognitively inaccessible" to us.
An essential feature of algorithmic warfare will be the automated research and immediate exploitation of vulnerabilities in adversarial AI systems to achieve supremacy. AI automates and accelerates cyber warfare, elevating the threat of cyber-enabled information warfare. AI will enable much more detailed profiling of potential targets ("individualized warfare") and the generation of deepfakes to manipulate public debate, creating competing narratives that could paralyze decision-making or undermine support during existential threats.
AI can transform international order and erode traditional nuclear deterrence. The atomic age's balance of power relies on nuclear survivability and Mutually Assured Destruction (MAD). AI allows real-time integration of big data analytics and advanced surveillance, making it easier to detect deployed strategic forces. Coupled with increasing weapon accuracy, speed, autonomy, and swarm technology (also AI-powered), these developments threaten the survivability of nuclear deterrents, posing new dilemmas for nuclear strategic stability. While nuclear deterrence isn't over, policymakers must realize that foundational assumptions regulating international order for decades may crumble, marking the end of "easy survivability" and the start of "vulnerability." Nuclear stability could also be disrupted if algorithms make mistakes, misinterpreting threats and misleading decision-makers into unintended escalation, or if policymakers fail to understand AI's strategic implications. In an AI-informed future, we risk a false sense of superiority, underestimating threats, or favoring aggressive strategies due to biased judgment or a desire for surprise.
Conclusion
The pervasive nature of the cyber domain makes entanglement its defining feature. The Soviet-era understanding of "cybernetics" as a science intersecting the exact, social, and natural sciences, exploring information management in complex systems, machines, and societies, aptly captures the strategic relevance of cutting-edge AI developments. AI integrates algorithmic decision-making into the pervasive cyber domain, accelerating its seamless overlap with the physical world. AI, in this sense, acts as a powerful force built upon the foundation of cyberspace, inevitably shaped by its basic building blocks, rules, and operating principles while also helping to define them. This explains why AI developments are bound to replicate and fast-track the competition for cyber superiority, and why the AI race is so central to today's Great Power Competition.
The race for AI leadership and the ongoing engagement in the cyber domain to attain cyber supremacy are two sides of the same coin: it's impossible to achieve "cyber superiority" while being "AI inferior," or to win at algorithmic warfare while being incapable of defending networks and data. AI will inevitably reinforce the security paradox in cyberspace, intensifying internet segmentation, the fierce confrontation of narratives already raging across networks, global supply-chain decoupling, rising international distrust, and the risk of misinterpretation, miscalculation, and unintended escalation into the conventional domain. Cross-domain escalation seems especially worrisome, given the entanglement that results from a multitude of public and private stakeholders sharing the same hardware and software infrastructure and the same tactics, techniques, and procedures.
What's more troubling is that if we were unprepared for the security issues raised by the cyber domain's emergence, we seem utterly unprepared to cope with the cognitive complexity AI will introduce. From an international-order perspective, AI security has mostly been viewed through the lens of International Humanitarian Law (IHL), focusing on principles like distinction, necessity, proportionality, predictability, reliability, transparency, and absence of bias. Autonomous AI decision-making enormously complicates that effort, yet IHL is only a small part of the issue: cyber confrontations usually stay below the threshold of force and are not necessarily military in nature. Protecting civilians from lethal autonomous weapons is vital, but it says little about AI's impact on fundamental human rights at the domestic level, an impact driven by the international competition for tech supremacy.
Cyberspace's emergence forced a paradigm shift in national security. States are pressured to protect citizens and companies unaccustomed to being targets of daily skirmishes, sophisticated malicious campaigns, or subtle influence operations. The more we rely on AI-enabled systems, online and offline, the more cyber stability will shape daily life. Our freedom and security will increasingly depend on a secure and stable cyberspace and on trustworthy AI. For liberal democracies, this poses numerous challenges:
- Preserving Technological Superiority: This is necessary for deterrence and defense in a volatile, unpredictable future shaped by AI and algorithmic warfare. It requires a deep understanding of how cyber confrontation links the political, military, economic, and sociological spheres – essentially, an updated "Mr. X" telegram for cyberspace, echoing George Kennan's analysis of containment. Such understanding is critical for clarifying deterrence postures in cyberspace, developing confidence-building measures, drawing clear red lines for retaliation, and managing cross-domain escalation risks, consistent with the broader goal of responsible, open AI development and security.
- Defending a Free and Global Internet: Against claims of national sovereign jurisdiction and autocratic "digital sovereignty" interpretations. This is where the deepest fault lines appear: for democracies, internet freedom is essential for fundamental rights; for others, an existential threat. This fundamental difference is unlikely to be reconciled, making the defense of this "new iron curtain" crucial for our freedom.
- Promoting Transnational, Multi-stakeholder Debate: Recognizing the new political reality of cyberspace and AI's impact on daily life. This is a precondition for defining core values, political priorities, mobilizing resources, and setting red lines, domestically, within alliances, and with potential adversaries. This report is a small contribution to that vital discussion.