Isaac Asimov’s classic I, Robot (1950) insightfully explored the ethical and moral implications of robotics and artificial intelligence. The book’s interconnected stories about robots guided by the Three Laws of Robotics highlight the challenges of ensuring AI’s safety and alignment with human values. Asimov illustrates the limitations of these laws and AI’s unpredictable nature, with quotes like, “You can’t argue with a robot. They’re terribly rational”, reflecting concerns about AI acting against human interests. The book forces one to ponder the dangers of AI being manipulated by adversaries and potentially turning against humans. As a fictional yet prescient work, I, Robot underscores the importance of ethical standards and security measures in AI development to prevent it from undermining national sovereignty.

Since algorithms are devoid of national allegiance or moral judgement, the challenge before the world is not just to develop this technology but to build the frameworks that ensure it serves humanity and not the other way around. The lack of a unified global framework for overseeing AI, coupled with the absence of national-level regulatory measures, poses a significant risk to national security and sovereignty in four distinct ways.

First, AI is reshaping traditional notions of sovereignty, challenging the power dynamics between states, private technology companies, and individuals. As AI systems become more autonomous, they are creating new digital spaces that are not governed by traditional laws or state control. These digital spaces, defined by code and data, can be seen as new forms of sovereignty where power is wielded by those who control the AI systems.

The emergence of AI has ushered in a new era of digital sovereignty, fundamentally altering the concept of territorial sovereignty. This transition affects how nations control their digital domains and AI technologies. Countries lagging in AI development and regulation may find themselves reliant on more advanced nations, putting their sovereignty at risk across sectors like defence, infrastructure, and healthcare. Additionally, the rise of AI shifts power from states to the private tech companies and individuals who dominate these digital spaces. Though these entities don’t possess traditional sovereignty, their influence challenges state authority and could reshape global political dynamics.

Second, AI has the potential to significantly impact democracy, particularly when leveraged by foreign powers. It can be used to manipulate information and sway public opinion, both of which are critical to democratic societies. For example, AI can generate disinformation and misinformation at scale, which can inflame tensions and even trigger election-related conflict and violence. Such AI-driven falsehoods can amplify biases or opinions that do not reflect public sentiment, thereby distorting the democratic process.

Moreover, foreign powers can utilise AI to conduct influence campaigns that are more sophisticated and less detectable. These campaigns can exacerbate divisions within societies, seed nihilism about the existence of objective truth, and weaken democratic systems from within. The borderless nature of AI makes it difficult to control or regulate, and as the technology becomes more advanced, it could be used by authoritarian regimes, terrorist organisations, and organised crime groups to cause great harm.

Third, Lethal Autonomous Weapons Systems (LAWS), often termed “killer robots”, present a profound threat to national security and sovereignty, raising critical ethical and technical challenges. From a technical standpoint, LAWS, equipped with advanced AI algorithms, can independently search for, identify, and engage targets without human intervention. This capability poses a risk of unintended escalation in military conflicts: because these systems act on pre-programmed criteria and lack human judgement and context awareness, they may engage in indiscriminate or erroneous targeting. Moreover, the risk of these systems being hacked or malfunctioning could result in catastrophic incidents, undermining national security.

Ethically, the deployment of LAWS challenges the fundamental principles of humanitarian law and responsibility. The absence of human oversight in the decision-making process of life and death raises significant moral questions about accountability. The principle of distinction, a cornerstone of international humanitarian law, mandates the differentiation between combatants and non-combatants. LAWS, reliant on algorithms for decision-making, may lack the nuanced understanding necessary to make these distinctions, risking civilian lives and violating international norms.

The proliferation of LAWS could lead to an arms race, destabilising international peace and security. As these weapons become more accessible, the barrier to entering conflict falls, potentially leading to increased warfare and undermining national sovereignty. The lack of regulation and control over LAWS also threatens the global order, as non-state actors might acquire and use these systems for terrorism or insurgency. The autonomy of these weapons undermines deterrence theory, which relies on rational human actors maintaining balance and avoiding conflict through the threat of retaliation. The unpredictability of LAWS disrupts this balance, potentially leading to uncontrolled escalation. Furthermore, the prospect of an arms race in autonomous weapons technology threatens global stability as nations prioritise technological advancement over diplomatic and strategic equilibrium.

Fourth, the integration of AI into cybersecurity is a double-edged sword: its potential for sophisticated cyberattacks directly threatens national security and sovereignty. AI-enhanced methods, such as advanced persistent threats and spear phishing, can penetrate and disrupt critical national infrastructure, undermining the stability and functioning of a state. Such disruptions not only pose immediate security risks but also threaten the economic and social well-being of nations. These threats extend beyond physical borders, as cyberattacks can originate from anywhere, making it challenging to safeguard national interests in an increasingly interconnected digital world. The implications for national security are profound, requiring nations to reassess their cybersecurity strategies and invest in advanced defences. Guarding against these AI-enhanced threats is crucial for maintaining national sovereignty, ensuring that states retain control over their critical infrastructure, information systems, and the democratic processes that define their governance.

In light of these challenges, it becomes evident that the world urgently requires a robust global AI governance body. Such an entity is essential to ensure that AI advancements serve humanity’s broader interests rather than undermine them. Alongside this, there is a critical need for evolving national-level regulations tailored to address AI’s unique threats to sovereignty and national security.

Debroy is chairman and Sinha is OSD, Research, Economic Advisory Council to the Prime Minister. Views are personal
