Human Rights And AI


"I have more faith in a statistical mechanism than a grieving soldier." This statement from an Israeli intelligence officer highlights how modern warfare is being shaped by artificial intelligence (AI). In the ongoing conflict between Israel and Palestine, Israel has been widely using an AI system called 'Lavender' to target individuals in Gaza. Various reports suggest that Lavender has been used to kill civilians in search of low-ranking militants.

In early military operations, identifying and selecting human targets was a labour-intensive process. With the advancement of AI, however, this process has become increasingly automated, especially in Israel, which has relied heavily on Lavender to generate databases of individuals judged to have the characteristics of a PIJ or Hamas militant. Alongside it, another AI-based decision support system called 'the Gospel' has been used to recommend structures and buildings as targets. Similarly, in the Russia-Ukraine war, AI has been used to geolocate and analyse open-source data, such as social media content, to identify soldiers, weapons, units, and even their movements.

Pros and cons of AI

As new technologies advance, international humanitarian law and the human rights framework continue to evolve in tandem. AI has significantly reshaped the landscape of human rights, presenting both opportunities and challenges. Its benefits to humanity have been substantial. It provided crucial insights and issued early alerts during the COVID-19 pandemic. It holds considerable promise for improving the lives of persons with disabilities by removing obstacles and delivering innovative solutions. From vision-enhancing tools to speech-to-text applications, AI has the potential to break barriers and open new avenues for interaction and communication. If promoted widely, AI-driven features can make technology more user-friendly and foster an inclusive digital environment. AI's capabilities extend to democratising access to knowledge, processing large volumes of information, unlocking medical breakthroughs, predicting natural disasters, and contributing to efforts to combat global warming.

While AI and emerging technologies hold promising prospects for advancing human rights, they also pose significant risks of violating these rights. The right to privacy and the right against discrimination are particularly vulnerable. For example, AI-generated deepfakes (synthetic replicas of real images, audio, or video) used to create fabricated videos, audio recordings, and misleading content pose a direct threat to these fundamental rights. Beyond privacy and discrimination, AI systems can undermine political processes, spread false information, and damage reputations. Many governments use AI-powered surveillance systems in which facial recognition technology is employed to monitor and control the movement of citizens, particularly targeting ethnic minorities and rebel groups. Such surveillance systems can also be used to suppress dissent and monitor political activities, infringing on privacy and freedom of expression.

Additionally, AI systems have been deployed to monitor and score citizens based on their financial habits, social interactions, behaviour, and compliance with laws. Citizens who score lower can face restrictions on travel, employment, and even access to credit. Such systems raise concerns about privacy, freedom of movement, and the potential for excessive state control over personal lives. There have also been instances where opaque automated decisions have resulted in wrongful deportations, violating individuals' rights to fair treatment and due process. In the criminal justice system, AI is being used to predict future criminal behaviour, a practice that has already been found to reinforce discrimination and undermine fundamental rights of justice, including the presumption of innocence.

Global efforts on AI

These incidents highlight the pressing need for human rights impact assessments in AI regulation to prevent potential harm while also harnessing the technology's benefits. Globally, some efforts have been made to address AI's impact. Over 37 countries have formulated AI-related legal frameworks intended to address public concerns about AI safety and governance. For instance, the European Union's (EU) landmark AI Act, passed on 13 March 2024, rests on two fundamental premises: that AI should be regulated according to its capacity to cause harm, and that AI systems should be subject to transparency requirements. In the United States, the Blueprint for an AI Bill of Rights has been issued, which addresses mainly two human rights principles: the right to receive notice and justification of algorithmic decisions affecting individuals' lives, and the right to be protected from unsafe or ineffective AI systems. However, many of these policies lack a strong focus on human rights.

AI and human rights 

As per the Government AI Readiness Index 2023, Nepal lags far behind, ranking 150th out of 193 countries. South Asia as a whole lags behind in AI regulation. In contrast, India, our immediate neighbour and a regional leader in AI, has made significant strides. The first edition of the 'IndiaAI 2023' initiative, drafted by an expert group for the Ministry of Electronics and Information Technology, places the idea of 'AI for All' at its centre. It provides a roadmap for integrating AI into India's governance structures, data management, privacy rights, and strategic partnerships to foster innovation and technological advancement while also recognising the intersections of AI and human rights.

Nepal's AI sector, however, remains largely unregulated, with a notable gap in the necessary laws and regulations. One commendable effort by the government is the development of its first-ever concept paper on AI, which lays the foundation for formulating essential policies and legal frameworks. While formulating these laws, it is crucial to understand the intersection of AI and human rights and to place human rights at the core of the regulatory framework. Through the lens of human rights, every AI-generated action is akin to opening Pandora's box, with the potential both to protect and to infringe upon human rights. The vast and largely unknown potential of AI, if not carefully regulated from a human rights perspective, can easily undermine rights guaranteed by the Constitution of Nepal, domestic legal frameworks, and international human rights instruments in ways that are hard to anticipate.

Regulations must be developed urgently, especially in developing countries like Nepal, where the majority of the population knows little about the use and impact of AI. These regulations should carefully assess human rights principles before, during, and after the deployment of AI technologies. AI technologies that do not comply with international human rights standards must be banned or suspended. For effective AI governance grounded in human rights, Nepal needs to ensure transparency in systems using AI and establish a competent oversight agency along with accessible remedies. Policymakers must seriously consider the impact of these systems on fundamental rights and freedoms, with special attention to areas with heightened risks of abuse, such as law enforcement, justice, digital equality, financial services, social justice, and social protection.

A national and international human rights framework addressing the cross-cutting field of AI and human rights is the need of the hour. Poised between deteriorating and saving humanity, AI could take either path, and probably the worse one if left unregulated. In his poem "The Sorcerer's Apprentice", Goethe illustrates what happens when someone lets the genie out of the bottle. The poem ends with the lesson that one should only invoke the magic once one has learned to master it. So what could be the instrument that helps us master AI? Arguably, it is human rights, which can serve as both the means of mastery and the solution: on the one hand, they help us navigate how AI can be used to safeguard and promote human rights, and on the other, they show how human rights themselves should be protected when AI is used.


(The author is a Human Rights Officer at the National Human Rights Commission.)

Author

Pooja Neupane