
The Geopolitics of Artificial Intelligence

  • Technology & Economy
  • Feb 19, 2025
Artificial Intelligence, Geopolitics, Cyberspace


Pavithran Rajan - Military veteran, Tech entrepreneur & Academic

By embracing an independent path, India can harness the potential of AI to create a secure nation that contributes to an equitable and secure global order. If we fail to do so, we risk reinforcing existing inequalities and entering an era of unchecked digital imperialism and authoritarianism.

Introduction

Geopolitics in cyberspace is a logical fallout of the post-Snowden world: a constantly shifting terrain shaped by economic competition, technological innovation and the relentless pursuit of dominance. Artificial intelligence (AI) is at the heart of this transformation, a force multiplier that amplifies existing power structures while introducing novel challenges in both peace and war. The unrelenting competition for dominance between great powers and their corporations tests the limits of governance, ethics and human agency. As AI becomes increasingly embedded in cyberspace, it reshapes the rules of engagement in ways that are as profound as they are unpredictable.

This evolution echoes Halford Mackinder's Heartland Theory, which emphasised control over geographical space as the determinant of global power. Today, however, it is not land but data sovereignty and technological ecosystems that increasingly determine supremacy. Scholars such as Joseph Nye, in The Future of Power, argue that cyber power is an extension of both soft and hard power, blurring the lines between state and non-state actors. Cyberspace is undoubtedly the strategic high ground of the Information Age.

The AI-Driven World

The assertion of state sovereignty over cyberspace has taken on new dimensions with the advent of AI. Governments, both democratic and authoritarian, are deploying AI-enabled surveillance systems and automated content moderation tools to monitor and manipulate their populations.

In China, AI powers the social credit system as a form of population control. The system creates a feedback loop of reward and punishment that enforces rules at a granular level of society: constant surveillance feeds behavioural data into scores, and those scores in turn gate citizens' access to services and shape future behaviour. A system in which behaviour modification is curated by constant surveillance is a dystopian state beyond Orwellian imagination. By analysing vast datasets on citizen behaviour, the state consolidates its power domestically, and China now exports this model of techno-authoritarianism to other nations eager to replicate its efficiency.
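
To make the feedback-loop mechanism concrete, here is a minimal toy sketch in Python; the event types, weights, thresholds and services are entirely invented for illustration and do not describe any real scoring system.

```python
# Toy illustration of a surveillance-driven scoring feedback loop.
# All event types, weights, thresholds and services are hypothetical.
EVENT_WEIGHTS = {
    "paid_bills_on_time": +5,
    "volunteered": +10,
    "jaywalking_detected": -5,
    "criticised_policy_online": -20,
}

ACCESS_THRESHOLDS = {
    "high_speed_rail": 600,
    "foreign_travel": 700,
    "bank_loan": 650,
}


def update_score(score, events):
    """Fold a stream of observed behaviours into a single score."""
    for event in events:
        score += EVENT_WEIGHTS.get(event, 0)
    return max(0, min(1000, score))  # clamp to a fixed range


def allowed_services(score):
    """The score gates access, which in turn nudges future behaviour."""
    return [svc for svc, threshold in ACCESS_THRESHOLDS.items() if score >= threshold]


if __name__ == "__main__":
    score = 650
    observed = ["paid_bills_on_time", "criticised_policy_online"]
    score = update_score(score, observed)
    print(score, allowed_services(score))
```

The point of the sketch is the loop itself: surveillance produces events, events move the score, and the score changes what a citizen can do next, which changes the events that follow.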

Yet this consolidation of power through surveillance also breeds its own set of vulnerabilities. AI depends on vast amounts of data and complex supply chains, both of which are susceptible to manipulation and sabotage. Shoshana Zuboff's seminal book, The Age of Surveillance Capitalism, highlights the power asymmetry between those who control AI and those subjected to it.

India, the crucial swing state whose latent power places it at the centre of the US-China rivalry, must strike a balance between embracing technological advancement and safeguarding its digital sovereignty. India's Digital Personal Data Protection Act, 2023 (DPDP Act) is a significant step towards this balance. The Act aims to regulate the collection, storage and use of personal data, thereby protecting the digital rights and privacy of Indian citizens. However, compared with the EU's General Data Protection Regulation (GDPR), the DPDP Act is still a work in progress. If it is not carefully curated, Indians could be condemned to remain perpetual second-class global citizens under constant foreign surveillance.

Invisible Gatekeeper

Integrating AI into critical information infrastructure (CII) intensifies competition for control over ICT choke points. AI-driven systems now underpin subsea cables, cloud platforms and 5G networks. Although AI optimises performance, detects anomalies and responds to threats in real time, it also exposes the nations that adopt these technologies to foreign surveillance. Adoption comes with inherent risks that remain invisible to large sections of the population in the Global South. China and the US have weaponised their ICT ecosystems through extraterritorial legislation such as the US CLOUD Act, FISA and China's National Security Law and National Intelligence Law. This forces smaller nations to choose alliances, often at the cost of sovereignty and a form of economic subjugation that is not easily visible.

AI in Contemporary Warfare

The Russia-Ukraine War illustrates that AI is no longer an auxiliary tool but an integral component of military strategy, psychological operations and asymmetric warfare.

Russia's war in Ukraine has seen the rapid evolution of AI-driven cyber warfare. Moving beyond traditional hacking operations focused on espionage or economic sabotage, Russia has leveraged AI-enhanced cyber warfare to pursue military, strategic and psychological goals. AI-powered deepfake videos, automated troll farms and generative disinformation campaigns have been used to destabilise Ukrainian morale, confuse military command structures and influence global narratives.

Deepfake videos of Ukrainian President Volodymyr Zelensky appearing to surrender surfaced in March 2022, aiming to disrupt Ukrainian resistance. Russian and Ukrainian AI-driven bot networks flooded social media with manipulated videos to create fake narratives of war crimes, shaping international responses. AI-enabled disinformation operations are more scalable and harder to detect, making them a force multiplier in hybrid warfare. AI has accelerated the speed, precision and scale of cyberattacks, making cyberspace a battlefield on a par with the land, air and sea domains.

Ukraine has used AI-powered tools for data fusion, analysing satellite imagery and drone footage to enable quicker identification of equipment deployments and battlefield changes. Simulations have been used to model potential outcomes of military engagements, helping commanders make faster decisions on troop movements and resource allocation. Ukraine has also employed AI to detect and counter Russian disinformation campaigns, analysing social media posts, news articles and other content to identify false narratives and track their spread. Experimental AI-powered ground robots have been tested for tasks such as mine clearance and surveillance, though their deployment remains limited.
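
As a rough illustration of the imagery-analysis step described above, and not of Ukraine's actual tooling, the sketch below runs a generic pretrained object detector over a single frame with PyTorch and torchvision; the model choice, the `drone_frame.jpg` input, the vehicle classes used as a proxy for "equipment" and the confidence threshold are all assumptions made for the example.

```python
# Minimal sketch: flag likely vehicles in a single drone/satellite frame.
# Uses a generic COCO-pretrained detector; an operational system would rely
# on purpose-trained models, georeferencing and multi-source fusion.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
categories = weights.meta["categories"]            # COCO class names
preprocess = weights.transforms()

VEHICLE_CLASSES = {"car", "truck", "bus", "train"}  # crude proxy for equipment
CONF_THRESHOLD = 0.6                                # assumed cut-off

frame = read_image("drone_frame.jpg")               # hypothetical input frame
with torch.no_grad():
    detections = model([preprocess(frame)])[0]

for box, label, score in zip(
    detections["boxes"], detections["labels"], detections["scores"]
):
    name = categories[int(label)]
    if name in VEHICLE_CLASSES and float(score) >= CONF_THRESHOLD:
        print(f"{name} ({float(score):.2f}) at {[round(v, 1) for v in box.tolist()]}")
```

In practice the "fusion" lies in combining many such detections across sensors, time and map coordinates; the sketch only shows the per-frame detection step.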

Incorporating AI-driven cyber warfare and autonomous military systems into conventional operations is transforming the nature of conflict and illustrates that AI has evolved from a tool into an active weapon. Military forces increasingly depend on AI as a combat asset, from AI-powered drones for kinetic action to automated deepfakes for cognitive effect. The use of AI in warfare should make us all cautious and aware of its implications.

Legal and Ethical Dilemmas of AI

While AI enhances national security, it raises significant ethical concerns that demand our attention and engagement.

Who is accountable for autonomous AI-driven cyberattacks? Determining responsibility becomes a legal quagmire when an AI system independently launches a cyber operation.

Can AI-generated misinformation be legally prosecuted? Deepfakes and AI-generated propaganda blur the line between free speech and malicious intent, complicating efforts to regulate harmful content.

What frameworks govern international cyber warfare?

The use of AI in military operations raises questions that existing frameworks struggle to answer. Autonomous weapon systems, popularly called killer robots, challenge traditional concepts of accountability: if an AI-powered platform targets civilians, it is unclear who should be held responsible, the original equipment manufacturer (OEM), the controller or the state. Further, the application of AI in psychological operations (PSYOPS), which convey selected information to audiences in order to influence their emotions and objective reasoning, exacerbates these ethical concerns. AI-generated propaganda manipulates public opinion, undermining trust in democratic institutions and in democracy itself.

Global AI Governance Framework 

AI also presents opportunities for establishing a global governance framework. The United Nations cybercrime treaty initially encountered challenges due to differing viewpoints among the United States, China and the European Union; each brings a distinct perspective, with China emphasising state control and the EU prioritising individual rights. The treaty, the United Nations Convention against Cybercrime, was finally adopted in 2024, with a mandate to strengthen international cooperation against crimes committed through ICT systems. Given AI's potential, it can serve as a crucial step towards a more secure and ethical digital world.

Building on the GDPR's privacy framework, the EU has enacted the AI Act (EU AIA), the first comprehensive law dedicated to AI, with most of its provisions applying from 2026. Its reach, however, will extend beyond the EU and affect far more than tech companies. This pioneering effort offers hope for future global cooperation in regulating AI, aiming for a more innovative and safer world, and is likely to spur similar legislation in other countries.

The Global South’s AI Struggle

The rapid expansion of AI has widened the digital divide and created new forms of dependency. Major tech companies from the US and China dominate the AI landscape, while countries in South Asia, Africa and Latin America often remain passive consumers of the technology. With the partial exception of India, many of these regions lack the human resources to develop indigenous AI solutions.

AI-driven facial recognition and surveillance technologies developed in the West have disproportionately affected marginalised communities, leading to wrongful arrests and discriminatory practices. That India, despite its potential, has struggled to develop its own ICT industry, the building block of a trusted and resilient foundation for AI, is a cautionary tale of the challenges ahead.

China's Digital Silk Road, meanwhile, is expanding AI-powered surveillance in Africa, creating dependencies on technological authoritarianism. China exports its surveillance model to African nations through cheap telecommunications equipment and smart urban projects that lack adequate privacy safeguards or protections for civil liberties.

India's "Atmanirbhar Bharat" initiative is a step toward fostering homegrown AI development and reducing dependence on foreign technologies. Despite some policy challenges, it has the potential to promote innovation and prioritise ethical considerations, ensuring that AI benefits everyone. If successful, it can become a third pillar: an alternative to the US and China for the Global South to adopt.

The Future of AI Governance

As cyber geopolitics evolves, nations must develop ethical AI strategies that:

1. Prioritise transparency in AI decision-making. Governments and corporations should disclose how AI systems are designed, trained and deployed to build public trust.

2. Mitigate algorithmic biases that reinforce inequalities. This requires diverse datasets and inclusive design processes to ensure fairness and accuracy (a simple bias check is sketched after this list).

3. Ensure democratic oversight of AI-driven surveillance tools. Independent bodies should monitor the use of AI in law enforcement and national security to prevent abuse.
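
As a concrete example of the kind of bias check that principle 2 implies, the sketch below compares a model's positive-outcome rate across demographic groups, a simple demographic-parity style audit; the records are invented for illustration, and a real audit would use proper datasets and several fairness metrics.

```python
# Toy bias audit: compare positive-outcome rates across groups.
# The (group, decision) records below are invented purely for illustration.
from collections import defaultdict

# 1 = favourable model decision (e.g. a loan approved), 0 = unfavourable
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

totals = defaultdict(int)
positives = defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    positives[group] += decision

rates = {g: positives[g] / totals[g] for g in totals}
for group, rate in rates.items():
    print(f"{group}: positive-outcome rate = {rate:.2f}")

# Demographic-parity gap: a large gap flags a potential bias to investigate.
gap = max(rates.values()) - min(rates.values())
print(f"parity gap = {gap:.2f}")
```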

For larger nations like India, the future depends on controlling their own AI trajectories rather than becoming digital colonies of other powers. The real challenge lies not merely in achieving technological supremacy but in fostering an AI ecosystem that balances security, innovation and human rights.

Conclusion

The geopolitics of AI is not merely an extension of traditional geopolitics but a transformative force reshaping power, identity and justice. DeepSeek, a Chinese large language model (LLM), has substantially altered the AI landscape by lowering the barrier to entry and undercutting the cost of the dominant US LLMs. It is an open-source model that anyone can now use. Its impact was immediate, wiping some two trillion US dollars off the stock market valuations of US firms. It also altered the narrative, shifting attention away from China as an authoritarian state seeking dominion over other states and back onto the US as a power seeking to perpetuate hegemony.

DeepSeek has also posed new questions for India's IT-enabled services industry and some of its vocal public champions, who had argued that India should not invest in foundational AI models because of prohibitive costs and should instead continue relying on US tech giants portrayed as unassailable in their technological leadership. It is now clear that the integration of AI reinforces existing power structures while presenting new challenges that test the limits of governance and human agency. Whether AI becomes a force for liberation or domination depends on our national choices.

As competition for dominance in cyberspace intensifies, it is evident that the stakes have never been higher, and the consequences have never been more significant. From the rise of cyber warfare to the ethical dilemmas surrounding AI governance, our decisions today will shape the future trajectory of multiple generations of Indians and impact other parts of the globe.

By embracing an independent path, India can harness the potential of AI to create a secure nation that contributes to an equitable and secure global order. If we fail to do so, we risk reinforcing existing inequalities and entering an era of unchecked digital imperialism and authoritarianism.

(Exclusive to NatStrat)


     
