Like all major technological advances, artificial intelligence poses significant challenges for the world in developing norms for its responsible use in the civil and military domains. While the discussion on promoting and regulating the civilian use of AI has gained much ground in recent years, the discourse on military uses has only begun to gain international traction. There is a growing global sentiment, on the one hand, for severely limiting the military applications of AI, especially autonomous weapons that can operate without human control. On the other hand, major powers are already investing heavily in the accelerated development of AI-based autonomy for weapons systems.

As the two trends play out, India has its work cut out in devising an effective national military strategy for AI and a credible international approach to limiting the dangers of these weapons.

At the beginning of this month, the UN General Assembly voted by an overwhelming margin – 164 in favour, five against, and eight abstentions – to urge the international community to address the challenges presented by lethal autonomous weapons, and requested the UN Secretary-General to produce a report taking into account the views of governments and civil society groups. This is the first time a UN resolution has addressed the issue; the question of autonomous weapons will be taken up again at the next annual session of the UNGA in September 2024. The vote marked the culmination of an important phase in the campaign by human rights and arms control activists to ban autonomous weapons. They argue that “killer robots” violate the basic principles of the international laws of war and raise fundamental ethical questions about human-machine relationships in the use of force.

If you think the world will move quickly towards banning these weapons, think again. The extensive use of drones in the Ukraine war has begun to push military planners around the world to focus on developing and deploying unmanned systems with greater autonomy.

The major military powers did not vote along similar lines. The US and its allies voted for the resolution; China abstained, and India voted against. As always in the UNGA, how a country votes does not necessarily reveal where it stands. That is where the “explanation of vote” comes in: explanations often qualify, or even override, the plain meaning of the vote. This is not surprising given the complex considerations that states have to juggle in multilateral fora.

Look beyond the diplomatic positioning in the UN, and the US, China and India are all engaged in developing autonomous weapons. They are very much part of the diplomatic jousting in the UN over the definition of autonomous weapons and the interpretation of their legitimacy. There is also contestation over the right forum for negotiation and, of course, over what exactly will be prohibited.

As AI technologies progress rapidly, all major military powers are focusing on expanding the autonomy of their weapons systems. This summer, the US sent a squadron of four uncrewed ships across the Pacific, from the American West Coast to Japan. During their transit, the ships interacted with crewed US warships operating in the Pacific. The US Navy has ambitious plans to build 150 uncrewed ships in the years ahead. The Navy, Air Force and Army, which are acquiring several drone systems, are experimenting with combined manned-unmanned operations. The Pentagon is also building new institutions to fully integrate AI into defence management. Its Chief Digital and Artificial Intelligence Office, set up last year, is responsible for accelerating the Pentagon’s “adoption of data, analytics, and artificial intelligence (AI) to generate decision advantage from the boardroom to the battlefield”.

Earlier this month, US Deputy Secretary of Defence Kathleen Hicks explained the new initiative for the Indo-Pacific, called Replicator, to develop and deploy thousands of unmanned systems across all domains within the next two years. Two assumptions underpin the Replicator initiative.

One is that the US cannot match, one-to-one, China’s military advantage in mass – more men, more ships, and more missiles. What it needs instead is innovation: the capacity to outthink China and develop capabilities and doctrines that can counter the PLA’s advantages. The other is that building many small, cheap, and easily replicated autonomous systems is the right innovation to deal with China’s growing military power. Even as it steps on the accelerator, the Pentagon insists on human control over the use of autonomous weapons. Its directives demand that all autonomous systems be designed to “allow commanders and operators to exercise appropriate levels of human judgment over the use of force”.

Meanwhile, China is not sitting on its hands. Beijing has put AI at the centre of building an “intelligentised” PLA. It is deploying AI across functions from inventory management, maintenance, and logistics to unmanned systems for a full range of missions, including reconnaissance, surveillance, and combat. China’s massive industrial capacity and the state’s power to direct resources mean it can turn out autonomous weapons faster than the US. The success of America’s asymmetric strategy, then, rests on staying well ahead of China in the development of AI and its integration into the armed forces. The US is also trying to slow down China’s AI progress by controlling the export of high-end chips and the equipment to make them. And it is strengthening its technology partnerships with allies to race ahead of China.

For India, the negative vote at the UNGA on autonomous weapons is part of an unfolding pragmatic turn in its engagement with global issues. There was a time when Delhi would simply support calls for banning things in the name of a commitment to disarmament. Today, India is signalling a balanced approach to national security, ethics, and global governance in engaging with the military applications of AI. Given the massive military imbalance with China and the challenges Delhi confronts from Beijing on both the Himalayan and maritime frontiers, AI must necessarily be an important part of India’s national defence plans. Although India has considerable strengths in AI, it is way behind the US and China in its military application.

What about India’s blossoming technological partnership with the US? To take full advantage of that partnership in a leading sector like AI, Delhi must invest in its national capabilities. This is not just about building a few drone systems, but about investing big in the core AI sciences, developing the full range of technological capabilities and operational military doctrines, and building the institutions to effectively integrate AI into Indian defence management and the armed forces.

Even as it builds national AI capabilities for defence, Delhi can’t give up on its tradition of shaping international norms. India had limited success in the past in developing global governance of emerging technologies. With growing technological capabilities today, it can have a bigger say in global outcomes by working with like-minded countries on the responsible military use of AI and ensuring that humans remain in the loop in the use of autonomous weapons.

The writer is senior fellow, Asia Society Policy Institute, Delhi and contributing editor on international affairs for The Indian Express
