Artificial intelligence (AI) has become an integral part of our lives, creating opportunities for efficiency, innovation and better decision-making. Yet it also poses unprecedented threats to our democracy, human rights and governance. Notably, it is reshaping the way we communicate, distorting public discourse, policy debate and election campaigns, and raising concerns about fundamental rights relating to privacy, misinformation and corporate power. If it remains unchecked, it could violate citizens' rights enshrined in the constitution; moreover, it shifts control over those rights and actions from the government to powerful corporations.
The government of Nepal has introduced the Bill relating to social media, which proposes strict penalties for spreading misinformation through fake accounts and AI bots. Meanwhile, the Electronic Transaction Act, 2063 (ETA), which remains in force to regulate digital activities, fails to clearly define cybercrimes such as data theft, cyberstalking, fraud and defamation. The nature of crime has changed with the advent of high technology: social media platforms and internet services powered by AI algorithms and bots now serve as tools for manipulating information, violating privacy and undermining democracy.
Corporate Concerns
Ownership and control of AI worldwide lie predominantly in the hands of the 'Frightful Five' corporations: Google, Facebook, Amazon, Apple and Microsoft. These five giants dominate AI and its advancement. Their control extends beyond AI development to AI research, knowledge, information, data and online discourse, with profit prioritised over the public interest. Most strikingly, social media powered by AI determines what people see online, shaping opinions and even influencing and manipulating political outcomes.
Furthermore, these tech giants secure and consolidate power and dominance by acquiring smaller companies. Google's purchase of DeepMind and Microsoft's heavy investment in OpenAI, for example, illustrate how AI monopolies grow. This raises obvious concerns about a level playing field for smaller competitors and about fairness, competition and labour rights. As Shoshana Zuboff explains in her account of surveillance capitalism, big corporations use personal data for their vested interests, leveraging AI as a weapon of socio-economic exploitation by manipulating information and thereby creating political influence.
Several research studies show that AI has turned social platforms into powerful tools for shaping and governing public opinion. AI-driven social media algorithms personalise content, manipulate users' beliefs, fuel political polarisation and spread misinformation. The Cambridge Analytica scandal, which broke in 2018, is a notable example: psychographic profiling was used to manipulate voter behaviour in the US presidential election and the Brexit referendum. In 2016, Hillary Clinton's campaign fell victim to AI-amplified disinformation, demonstrating AI's capacity to disrupt democracy. Likewise, deepfake technology enables convincing impersonation that mimics real individuals to deceive the public.
Today, mass surveillance is among the most pressing ethical issues worldwide. Social media platforms track users' online activities, analysing and manipulating personal data to predict and influence behaviour. It is not only active users who are affected; even non-users of social media are monitored, an alarming demonstration of AI's reach beyond active participants. Furthermore, facial recognition and predictive policing, enabled by AI-driven social media, exacerbate privacy violations.
Hi-tech corporations and governments collect vast amounts of data and information without the lawful consent of the individuals concerned. In this connection, the EU's General Data Protection Regulation (GDPR) is one of the few attempts to regulate AI surveillance, but enforcement of such regulations remains challenging, underscoring the need for stronger legal frameworks enacted by parliament.
The 'Californian Ideology', a mix of libertarianism and technological utopianism championed by tech companies, advocates self-regulation rather than government oversight. John Perry Barlow's A Declaration of the Independence of Cyberspace (1996), for instance, argued that governments should have no control over technological governance, in effect allowing corporations to exploit AI for profit, influence and surveillance in the name of self-regulation. Tech firms have established AI ethics committees, but these lack accountability. Google's Advanced Technology External Advisory Council (ATEAC), for instance, faced criticism over its composition and independence, leading to its dissolution. Thus, self-regulation alone is insufficient; parliamentary oversight is essential.
Legal Gap
Nepal has a clear gap in its legal framework for digital and AI governance. While the constitution protects and promotes the rights to privacy and freedom, social media powered by AI undermines these rights. The prevailing ETA fails to define cybercrimes, making it difficult to prosecute AI-related offences. Laws are therefore urgently needed to address the negative impacts of AI-powered digital platforms on democracy and the rule of law, and to ensure transparency, data protection and accountability. In this regard, parliament must debate the Social Media Bill with these impacts in mind, focusing on AI-driven content moderation, digital security and the ethical governance of AI.
Finally, sooner rather than later, the government must enact a stronger legal framework for regulating AI-driven social media, one that prevents monopolies and ensures transparency in AI decision-making. The discussion and deliberation on the Social Media Bill is a significant opportunity to establish legal safeguards against AI's threats to democracy and the rule of law. However, the Bill should not only address cybercrimes but also consider AI's broader impact on governance, privacy, human rights and public trust.
(The author is an advocate and development practitioner.)