Sahishnu Poudyal
Nowadays, when something goes wrong, people quickly point fingers at AI. But the deeper issue is this: was it really the AI, or was AI used as an excuse to avoid human accountability and responsibility? AI skilling requires three intersecting domains: core technical skills (computer vision and machine learning), applied skills (ethics, data governance, and regulatory understanding), and multidisciplinary expertise (domain knowledge in human-centred design). Have we asked whether we are well-equipped with these skills?
Consider the recent controversy involving Ashika Tamang, a member of Parliament, who claimed that a viral video of her dancing while carrying a copy of the constitution was made using AI. People rushed to criticise AI without examining the context or understanding the role humans play in creating and distributing content.
This is where the human-in-the-loop (HITL) concept becomes crucial. HITL emphasises that AI is not autonomous; it works in collaboration with humans. Humans design, train, deploy and supervise an AI system. When errors occur, it is often not the AI that has failed, but the system of humans responsible for guiding, monitoring, and validating AI outputs. In the case in question, whether or not the video was AI-generated, the controversy highlights how humans interpret, share, and react to AI-created content. The real failure lies in oversight, context, and responsible usage, not in the technology itself.
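For readers who want a concrete picture, the HITL idea can be sketched in a few lines of code: a model's output is accepted automatically only when it is confident, and everything else is escalated to a human reviewer who remains accountable for the decision. This is a minimal, hypothetical sketch; the function names (`classify`, `human_review`, `moderate`) and the confidence threshold are illustrative assumptions, not a real moderation API.

```python
# Hypothetical human-in-the-loop (HITL) content-moderation sketch.
# Low-confidence AI outputs are routed to a human reviewer instead of
# being published automatically, so a person stays accountable.

CONFIDENCE_THRESHOLD = 0.85  # assumed cut-off for automatic decisions


def classify(content: str) -> tuple[str, float]:
    """Stand-in for an AI model: returns a label and a confidence score."""
    # A real system would call a trained model here; this toy version
    # just returns low confidence for anything marked "viral".
    if "viral" in content:
        return "authentic", 0.62
    return "authentic", 0.95


def human_review(content: str, label: str) -> str:
    """Stand-in for a human moderator's decision."""
    print(f"Flagged for human review: {content!r} (model said: {label})")
    return "needs_verification"


def moderate(content: str) -> str:
    """Accept only high-confidence AI outputs; escalate the rest."""
    label, confidence = classify(content)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label  # AI decision accepted, but humans set the threshold
    return human_review(content, label)  # a human makes the final call


print(moderate("a viral video of a politician"))  # escalated to a human
print(moderate("an ordinary clip"))               # handled automatically
```

The point of the sketch is that every step, the threshold, the escalation path, the final verdict, is a human design choice: when such a system errs, the failure traces back to the people who configured and supervised it.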
Similarly, during the Gen Z movement of September 8 and 9, 2025, former prime minister Sher Bahadur Deuba claimed that a video purportedly showing a large amount of cash at his Budhanilkantha residence was likely an AI-generated fabrication. Likewise, in a recent Parliament session, CPN-UML Party leader Ram Bahadur Thapa attributed election outcomes to AI and algorithms, reflecting misconceptions about AI's autonomous power.
Changing the narrative around AI mistakes is essential. Instead of saying 'AI failed', we should acknowledge the human responsibility behind the system. Mistakes can arise from biased data, misuse, lack of supervision, or overreliance on automated outputs. By adopting a human-in-the-loop approach, we ensure accountability, ethical usage, and better outcomes.
This perspective is particularly relevant in Nepal, where AI is emerging as one of the most transformative technologies of the 21st century, promising to revolutionise healthcare, education, agriculture, governance and media. The technology presents both opportunities and challenges, but a growing tendency to blame AI for errors or societal issues risks undermining accountability and distorting public perception of innovation.
Misunderstandings and misplaced blame can discourage innovation and responsible adoption if the narrative is not corrected. AI is a tool, not a scapegoat. As the Ashika Tamang example shows, we must focus on how humans use, supervise and contextualise AI systems. By shifting the narrative from "AI makes mistakes" to "human-in-the-loop systems require accountability", we empower society to use AI responsibly and ethically.
The future of AI in Nepal and globally depends not on fearing mistakes but on embracing responsible human oversight. What we need to understand is that AI does not operate independently; humans teach it, tune it, and decide how to use it. So when something goes wrong, the lesson should not be that AI is at fault, but that we need better human governance, ethics, and responsibility in AI use.