NatStrat

Why India urgently needs a legal framework to regulate Artificial Intelligence


Rajinder Kumar Vij - Indian Police Service (Retd.)

Introduction

The growing use of artificial intelligence (AI) systems across the world has raised reasonable concerns about the risk of undermining human rights, democracy and the rule of law. While the benefits of AI should be harnessed by promoting digital literacy, an acceptable legal framework is essential to regulate AI and, in particular, to check its misuse.

There are currently no specific laws regulating AI in India. However, a clutch of new regulations and agreements to oversee AI tools has emerged at the national and multilateral levels, including the G7 pact on AI (October 2023), the Bletchley Declaration (October 2023) and the EU's AI Act (March 2024).

AI Policy around the World

The G7 pact on AI is called the International Code of Conduct for Organizations Developing Advanced AI Systems. The 11-point code aims to ‘promote safe, secure, and trustworthy AI worldwide’ through ‘voluntary guidance for actions by organisations developing the most advanced systems.’ These include generative AI applications like ChatGPT and the foundation models they are built on.

The US issued an executive order on AI on 30th October 2023 which largely relies on industry self-regulation. Developers of powerful AI models will be directed to put them through safety tests and submit the results to the government before public release. The initiative also creates infrastructure for watermarking standards for AI-generated content, such as audio or images, often referred to as ‘deepfakes’. A total of 15 companies have signed on to the commitment.

In November 2023, India joined 27 other countries, including the US and China, in adopting the Bletchley Declaration, a commitment from the UK-led AI Safety Summit that acknowledges the need for a global alliance to combat AI-related risks such as disinformation. The signatories vowed to ‘work together in an inclusive manner to ensure human-centric, trustworthy and responsible AI that is safe and supports the good of all.’

The US joined the UK in announcing the creation of its own AI safety institute.

However, the European Union’s AI Act, approved in March 2024, took a more prospective approach of classifying AI systems based on perceived risk and imposing graded regulatory requirements accordingly. Some AI uses are banned outright because they are deemed to pose an unacceptable risk, such as social scoring systems that govern how people behave. High-risk uses of AI, such as in medical devices or critical infrastructure like water or electrical networks, face tougher requirements, such as using high-quality data and providing clear information to users. There are also punitive provisions: fines may reach 30 million euros or 6% of total worldwide annual turnover, depending on the severity of the infringement.

Meanwhile, the first legally binding international treaty, the ‘Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law’, was signed on 5th September 2024 by the US, the EU and the UK. The treaty adopts a risk-based approach to the design, development, use and decommissioning of AI systems. The signatories will be accountable for ‘any harmful and discriminatory outcomes of AI systems’ and will ensure that ‘outputs of such systems respect equality and privacy rights, and that victims of AI-related rights violations have legal recourse’. Even though the treaty is being called ‘legally binding’, it does not contain provisions for punitive sanctions such as penalties and fines. Compliance is ensured primarily through ‘monitoring’, which is not much of a deterrent from an enforcement point of view.

The summit on Responsible AI in the Military Domain (REAIM), held in Seoul in September 2024 to shape global norms on military applications of AI, is also pertinent. The US introduced a resolution on the “responsible use” of AI by the armed forces. Israel has allegedly used AI-based programmes to detect and strike suspected operatives of the militant group Hamas.

Misinformation, Disinformation and Fake News

While the world is struggling to regulate AI through varying interventions, the current legal landscape is mostly occupied with issues such as the violation of privacy rights, the spread of biased or false information, ownership of AI-generated content, and the role of intermediaries in checking crime.

Italian Prime Minister Giorgia Meloni sought one lakh (100,000) euros in compensation over derogatory videos of her made using deepfake technology. Hollywood actress Scarlett Johansson accused OpenAI of cloning her voice without permission and violating her personality rights. In India, in May 2024, the Delhi High Court protected the personality and publicity rights of actor Jackie Shroff, restraining various e-commerce stores, AI chatbots and others from misusing the actor’s name, image, voice and likeness without his consent.

The issue of biased, false or misleading information being spread using AI tools (to undermine the democratic process or otherwise) remains a matter of serious concern. The Ministry of Electronics and Information Technology (MeitY) issued an advisory in March 2024 saying that all AI models, large language models (LLMs), software using generative AI or any algorithms that are currently being tested or are unreliable in any form must seek “explicit permission of the government of India” before being deployed for users on the Indian internet.

The Ministry asked all platforms to ensure that “their computer resources do not permit any bias or discrimination or threaten the integrity of the electoral process” through the use of AI, generative AI, LLMs or any other such algorithm. The advisory, however, was later withdrawn after being criticised as vague and legally unsound.

Case Studies

In a recent case dubbed LLM vs. LLM (2024), the Punjab and Haryana High Court issued a notice to O.P. Jindal Global University, which had disqualified a student on the ground that his exam submissions were AI-generated, amounting to the use of ‘unfair means’. The petitioner defended himself by arguing that even if he had incorporated AI assistance, he retained ownership rights under the Copyright Act, 1957. This case could set a new precedent on the ownership of AI-generated content.

Earlier, in the case of the AI-based painting app Raghav, Ankit Sahni’s first application listing Raghav as the sole author was rejected by the copyright office. His second application, naming himself and Raghav as co-authors, was initially approved for registration, but the office later overturned the decision, saying it had mistakenly granted the registration and that the work’s “non-human authorship” had not been taken into account.

The co-founder of the messaging platform Telegram, Pavel Durov, was arrested in Paris in August 2024 over allegations of inadequate efforts to combat crime on the app, including the spread of child sexual abuse material. However, after his release he said “it is absurd to claim that a platform or its owner are responsible for abuse of that platform.” This reignited the debate on the role of intermediaries in checking crime and assisting enforcement agencies.

Courts themselves are using AI tools in various capacities. The Manipur High Court in one case used ChatGPT to understand the concept of the Village Defence Force (VDF), which informed its ruling. The Punjab and Haryana High Court used ChatGPT to supplement its reasoning on the ‘jurisprudence on bail in cases of assault with an element of cruelty’ before deciding a bail matter. In August 2023, the Delhi High Court held that ChatGPT cannot be used to decide ‘legal or factual issues in a court of law’.

In December 2023, the UK judiciary released a set of guidelines about the use of generative AI in courts. Judges were allowed to use ChatGPT for basic tasks such as summarising large bodies of text, making presentations, or composing emails, but they were cautioned not to rely on AI for legal research or analysis. No such guidelines have been issued in India.

AI has great potential to transform the way police forces prevent, investigate and detect crime. AI can not only sort and analyse crime data faster, but also identify patterns and links in legacy data and make reliable predictions. It can also enhance image and video clarity. Reportedly, the London police have used advanced AI-powered cameras to conduct facial recognition scans and arrest suspects. In India, the Karnataka Police has introduced “AI-enabled policing technology” to provide quick redressal of issues through multi-dimensional analysis of incidents, integrating it with the existing Crime and Criminal Tracking Network and Systems (CCTNS).

In June 2024, the IMF issued an AI Preparedness Index in which India ranked 72nd among 174 countries. Whereas Singapore topped the list with 0.8 points, India scored 0.492 points. The index was computed on four parameters: digital infrastructure; human capital and labour market policies; innovation and economic integration; and regulation and ethics. India therefore needs to scale up its digital capacity to cope with emerging challenges.

India is working on a Digital India Bill, which is expected to address AI regulation in detail. The IT Act of 2000 neither defines AI nor addresses AI practices and processes. Generative AI tools cannot be allowed to undermine human rights or the sovereignty and integrity of a country. The IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, notified in 2021, provide for due diligence by intermediaries, but they lack a forward-looking stance. The principle of ‘safe harbour’ that shields intermediaries also needs to be revisited as online accountability standards shift. India needs robust legislation that can proactively address the challenges posed by the digital revolution of the 21st century.

(Exclusive to NatStrat)

