Challenges Of AI In Cybersecurity


Artificial Intelligence (AI), long a subject of academic research, is now unfolding in a realm of experimental opportunities, hopes, and unexplored potential. The recent surge in AI's momentum is attributable to several factors: advances in technology, including cloud computing, big data, and innovative software development practices; financial factors, such as the reduced cost of technology and human capital; and a larger pool of talent, with a growing number of PhDs across multiple interdisciplinary areas. This rise in AI's prominence is further fuelled by its democratisation: a convergence of enhanced human expertise, accessible cloud infrastructure, and the widespread adoption of digital data and systems across continents.

Despite its longstanding presence in academia, AI's real-world applications have seen more pronounced development and public interest in the past decade, albeit with a mix of significant hype and limited yet impactful industry implementations. The cybersecurity challenges posed by AI are multifaceted, extending beyond technological hurdles to encompass ethical, legal, and societal dimensions. They are particularly pronounced in areas like regulation, international rules of engagement (a longstanding issue in cybersecurity), and principles of freedom, democracy, and free speech. The intricacies of AI intersect with far more than cybersecurity, especially in terms of data integrity and validity throughout the data lifecycle, but this article focuses predominantly on the cybersecurity aspects, acknowledging the interconnected yet distinct nature of issues of trust, privacy, and safety.

AI models

Generative AI language models, such as ChatGPT and Bard, have recently come under close examination. For the first time, the US Senate conducted a hearing in which members interacted with an AI model. The spotlight is on generative AI, a branch of machine learning that employs AI to generate new content, including text, images, music, audio, video, 3D models, and synthetic data. Examples of generative AI applications include chatbots handling customer queries in call centres, models that learn and relate features to generate images of objects not present in the training dataset, and systems that emulate unique artistic styles to create new visuals. While these represent current advancements, other AI domains remain largely unexplored at scale.

The implications are significant, particularly in customer service roles or other jobs involving direct human interaction. A critical concern arises when individuals seek information on urgent matters, such as natural disasters or safety issues, and the AI model lacks the training to respond appropriately. While cybersecurity is agnostic to the type of digital application, AI applications present unique challenges in terms of attack surface and potential impact. The Cybersecurity and Infrastructure Security Agency (CISA), in collaboration with the Australian Signals Directorate's Australian Cyber Security Centre (ACSC), emphasises several AI-specific cybersecurity threats:

Data Poisoning: This adversarial attack involves deliberately corrupting training data to degrade the performance of AI systems (a minimal sketch of this attack appears after this list).

Input Manipulation Attacks: These involve real-time alterations to the inputs of an AI system, such as sensor readings, settings, or user inputs. The goal is to influence the AI's response or actions, potentially leading to system failures or erroneous decisions. 

Generative AI Hallucinations: This occurs when a generative AI, like a large language model, generates inaccurate or misleading outputs, presenting nonexistent patterns or objects as real.

Privacy and Intellectual Property Risks: Users of AI tools should be aware of the potential infringement of intellectual property rights, especially if the AI has been trained on protected data. Furthermore, AI solutions tailored to individuals can raise significant privacy concerns.

Model Stealing and Training Data Exfiltration: Here, attackers replicate an AI model by exfiltrating its input and output data. Data leakage and exfiltration are longstanding cybersecurity issues, typically mitigated with data loss prevention methods.

Re-identification of Anonymised Data: This threat involves identifying individuals in supposedly anonymised datasets, leading to privacy breaches and potential misuse of sensitive information. Ensuring data de-identification requires not only removing personal identifiers but also considering the context and re-identification risks (a second sketch after this list illustrates such a linkage attack).
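
To make data poisoning concrete, here is a minimal sketch in Python. It assumes a scikit-learn environment and a synthetic dataset; the 30 per cent flip rate and the logistic-regression model are illustrative choices, not a real attack recipe. Flipping a fraction of training labels is one of the simplest poisoning strategies, and even it measurably degrades the trained model.

```python
# A minimal sketch of label-flipping data poisoning, using scikit-learn.
# The dataset, model, and 30% flip rate are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)

# Build a clean, synthetic binary-classification dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: model trained on clean data.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy:   ", clean_model.score(X_test, y_test))

# Attack: flip the labels of 30% of the training rows. A real attacker
# would corrupt data upstream, e.g. in a scraped or crowd-sourced corpus.
y_poisoned = y_train.copy()
idx = rng.choice(len(y_poisoned), size=int(0.3 * len(y_poisoned)), replace=False)
y_poisoned[idx] = 1 - y_poisoned[idx]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

Running the script typically shows the poisoned model scoring noticeably below the clean baseline, which is exactly the degradation this threat describes.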
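The re-identification threat can likewise be shown in a few lines. The sketch below is hypothetical: the records, column names, and the idea of linking a health dataset to a public voter roll are illustrative assumptions. Joining two tables on quasi-identifiers (postcode, birth year, sex) is enough to re-attach names to "anonymised" diagnoses.

```python
# A minimal sketch of re-identification by linking quasi-identifiers.
# Both datasets and all column names here are hypothetical.
import pandas as pd

# "Anonymised" health records: direct identifiers removed, but
# quasi-identifiers (postcode, birth year, sex) retained.
health = pd.DataFrame({
    "postcode": ["44600", "44600", "33700"],
    "birth_year": [1980, 1975, 1980],
    "sex": ["F", "M", "F"],
    "diagnosis": ["diabetes", "asthma", "hypertension"],
})

# A public auxiliary dataset, e.g. a voter roll, with names attached.
voters = pd.DataFrame({
    "name": ["A. Sharma", "B. Thapa", "C. Gurung"],
    "postcode": ["44600", "44600", "33700"],
    "birth_year": [1980, 1975, 1980],
    "sex": ["F", "M", "F"],
})

# Joining on the quasi-identifiers re-attaches names to diagnoses.
linked = health.merge(voters, on=["postcode", "birth_year", "sex"])
print(linked[["name", "diagnosis"]])
```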

Let's align these threats with the CIA (confidentiality, integrity, and availability) triad of cybersecurity:

Confidentiality: This aspect is critically concerned with safeguarding intellectual property, Personal Health Information (PHI), and Personally Identifiable Information (PII). The ramifications of data leaks and exfiltration are significant, emphasising the need for stringent measures to maintain data confidentiality.

Integrity: Central to this is the issue of data poisoning. A closer examination of the highlighted threats reveals their core relation to data integrity, amplified by the extensive datasets utilised in ML models. This scale not only increases the attack surface but also adds complexity to the architecture, making end-to-end security a demanding task. It requires a secure data supply chain and periodic integrity checks at the various development, deployment, and operation stages; a minimal sketch of such a check follows this list. Compromised data integrity poses substantial risks to privacy and intellectual property.

Availability: The sheer size and scale of ML/AI models introduce distinctive challenges regarding availability and application scope. For instance, Distributed Denial of Service (DDoS) attacks have a broader impact on these systems than on conventional applications. Furthermore, the Recovery Time Objective (RTO) for AI/ML systems tends to be longer, complicating cybersecurity incident investigations and potentially affecting service availability after an incident.
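
As a concrete example of the periodic integrity checks mentioned under Integrity, the following Python sketch records a SHA-256 digest for each data file at ingestion and verifies the digests before each training run. The file layout, manifest format, and function names are assumptions for illustration; a real pipeline would typically also sign the manifest and track data provenance.

```python
# A minimal sketch of a dataset integrity check: record SHA-256 digests
# at ingestion, verify them before training. Paths and the JSON manifest
# format are illustrative assumptions.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 to avoid loading it into memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(data_dir: Path, manifest: Path) -> None:
    # Record a digest for every data file at ingestion time.
    digests = {p.name: sha256_of(p) for p in sorted(data_dir.glob("*.csv"))}
    manifest.write_text(json.dumps(digests, indent=2))

def verify_manifest(data_dir: Path, manifest: Path) -> bool:
    # Before training, confirm no file has been altered since ingestion.
    expected = json.loads(manifest.read_text())
    return all(sha256_of(data_dir / name) == digest
               for name, digest in expected.items())
```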

Persistent challenge

Cybersecurity, inherently complex, remains a persistent challenge and is likely to continue as such. This complexity takes on additional dimensions in AI applications, affecting not just the scale and breadth of attack surfaces but also the underlying architecture. Crucially, it influences perspectives on public policy, national security, and concepts of trust, safety, and privacy, topics that will be explored in future articles.

 (The author is a cybersecurity expert. ravi.dhungel@gmail.com.)

