Artificial Intelligence (AI) is opening up a plethora of opportunities and previously uncharted possibilities across industries. Securing AI in clinical systems and medical devices, however, poses unique challenges. Protecting Personal Health Information (PHI) and ensuring patient safety are paramount, which brings highly regulated industries into an AI landscape where patient safety often depends on clinical AI agents. These cascading clinical agents are often loosely coupled and managed by master agents, augmenting the clinical practitioner's capabilities.
Data poisoning is a type of adversarial attack in which data is intentionally contaminated to degrade the performance of machine learning (ML) systems. Such an attack can be detrimental to clinical systems and medical devices, directly impacting patient safety. Real-time input manipulation attacks, by contrast, modify an AI system's inputs at the point of use: changing sensor readings, settings, or user inputs to manipulate the AI's responses or actions. These attacks could cause AI-powered systems to malfunction or make poor decisions, with direct clinical impact because compromised data integrity can lead to incorrect diagnoses.
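By way of illustration, the short sketch below shows one possible mitigation: validating incoming sensor readings against plausible physiological ranges and rates of change before they ever reach an AI component. The sensor names, limits and function names are illustrative assumptions, not taken from any particular device.

```python
# Minimal sketch of input validation for an AI-driven monitor.
# All sensor names, ranges and thresholds are illustrative assumptions.

PLAUSIBLE_RANGES = {
    "heart_rate_bpm": (20, 250),
    "spo2_percent": (50, 100),
    "glucose_mg_dl": (20, 600),
}

MAX_DELTA_PER_SECOND = {
    "heart_rate_bpm": 30,
    "spo2_percent": 10,
    "glucose_mg_dl": 50,
}


def validate_reading(sensor, value, previous_value=None, seconds_elapsed=1.0):
    """Return True if a reading looks physiologically plausible.

    Rejects values outside hard limits and implausibly fast changes,
    either of which can indicate tampering or sensor manipulation.
    """
    if sensor not in PLAUSIBLE_RANGES:
        return False  # unknown channel: reject rather than trust it
    low, high = PLAUSIBLE_RANGES[sensor]
    if not (low <= value <= high):
        return False
    if previous_value is not None:
        rate = abs(value - previous_value) / max(seconds_elapsed, 1e-6)
        if rate > MAX_DELTA_PER_SECOND[sensor]:
            return False
    return True


if __name__ == "__main__":
    # A sudden, implausible jump in heart rate is flagged instead of being
    # passed straight to the downstream AI model.
    print(validate_reading("heart_rate_bpm", 82, previous_value=78))   # True
    print(validate_reading("heart_rate_bpm", 240, previous_value=78))  # False
```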
AI hallucination
An AI hallucination occurs when a generative AI model produces inaccurate or misleading information and presents it as if it were correct. This can happen when a large language model (LLM) perceives patterns or objects that do not exist. For instance, an LLM trained on clinical and PHI data might generate a patient's medical history that includes conditions or treatments the patient never had, or provide incorrect dosage recommendations for a medication. In a medical device, if an AI component responsible for interpreting sensor data hallucinates a dangerous malfunction that is not occurring, it could trigger unnecessary shutdowns or incorrect alerts, potentially disrupting patient care.
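One common guardrail is to check generated output against an authoritative reference before it reaches a clinician or a device actuator. The sketch below is purely illustrative: the drug names, dose limits and function are assumptions standing in for a validated formulary database.

```python
# Hypothetical guardrail: flag LLM dosage suggestions that fall outside a
# validated formulary range. Drug names and limits are illustrative only.

FORMULARY_LIMITS_MG = {
    "drug_a": (5, 40),     # assumed single-dose range in mg
    "drug_b": (100, 500),
}


def check_dose_suggestion(drug, suggested_dose_mg):
    """Return (accepted, message) for an LLM-generated dose suggestion."""
    if drug not in FORMULARY_LIMITS_MG:
        return False, f"{drug}: not in formulary, route to pharmacist review"
    low, high = FORMULARY_LIMITS_MG[drug]
    if low <= suggested_dose_mg <= high:
        return True, f"{drug}: {suggested_dose_mg} mg within reference range"
    return False, f"{drug}: {suggested_dose_mg} mg outside {low}-{high} mg, flag for review"


if __name__ == "__main__":
    print(check_dose_suggestion("drug_a", 20))   # accepted
    print(check_dose_suggestion("drug_a", 400))  # flagged as a likely hallucination
```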
In clinical systems and medical devices, privacy and intellectual property (IP) threats arise from several vulnerabilities, especially when AI tools are involved. Customers using AI tools should consider the risk of infringing IP rights: if the tool uses, or has used, protected data as a training basis, protected works may surface in the tool's outputs. Profiling individuals for tailored solutions also poses a threat to privacy. Even when direct identifiers are removed, AI models can re-identify individuals from supposedly anonymised clinical datasets. For example, if an insurance company gains access to re-identified healthcare data, it could exploit private medical information, a significant threat to patient privacy that could lead to discrimination or targeted marketing.
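A simple way to reason about this re-identification risk is a k-anonymity check over the quasi-identifiers that remain after direct identifiers are stripped: a group size of one means a patient is unique on those fields and easily re-identified. The records and field names in the sketch below are invented for illustration.

```python
from collections import Counter

# Illustrative "anonymised" records: direct identifiers removed, but
# quasi-identifiers (age, postcode prefix, diagnosis) remain.
RECORDS = [
    {"age": 34, "postcode_prefix": "560", "diagnosis": "asthma"},
    {"age": 34, "postcode_prefix": "560", "diagnosis": "asthma"},
    {"age": 71, "postcode_prefix": "110", "diagnosis": "rare_condition_x"},
]

QUASI_IDENTIFIERS = ("age", "postcode_prefix", "diagnosis")


def k_anonymity(records, quasi_identifiers):
    """Return the size of the smallest quasi-identifier group.

    A value of 1 means at least one patient is unique on these fields and
    is therefore at high risk of re-identification.
    """
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())


if __name__ == "__main__":
    k = k_anonymity(RECORDS, QUASI_IDENTIFIERS)
    print(f"k-anonymity = {k}")  # k = 1 here: the third record is unique
```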
AI solutions that create detailed profiles of individuals based on their health data can infringe on privacy. This profiling can lead to concerns about how personal health information is used, shared, and potentially misused by various entities. The highly sensitive nature of Personal Health Information (PHI) and Personally Identifiable Information (PII) in clinical systems makes data leakage a severe privacy threat. If this data is exfiltrated, it can lead to widespread privacy violations for patients and regulatory fines for the business. AI tools often rely on vast amounts of training data. If this data includes protected works (e.g., patented algorithms, copyrighted medical images, or proprietary research findings), customers using the AI tool could inadvertently infringe on IP rights.
This is because an AI tool embedded in clinical systems and medical devices might incorporate or reproduce elements of the protected data. Malicious actors can also reverse-engineer AI models used in clinical systems or medical devices by analysing their inputs and outputs. This allows them to create a "lookalike" model at a fraction of the cost, essentially stealing the intellectual property embedded in the original model's design and training. This threat can undermine the competitive advantage of companies developing innovative AI-powered medical and clinical solutions.
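One partial defence is to limit what each query reveals. The hypothetical sketch below wraps a model's prediction function so that only the top label is returned, with no confidence scores, and each client is held to an hourly query budget; the budget, client identifiers and stand-in model are assumptions made for illustration.

```python
import time

# Hypothetical wrapper around a deployed clinical model's prediction function.
# Goal: make query-based model extraction harder by withholding confidence
# scores and enforcing a per-client query budget. Values are illustrative.

QUERY_BUDGET_PER_HOUR = 100
_query_log = {}  # client_id -> list of recent query timestamps


def _within_budget(client_id, now=None):
    now = now if now is not None else time.time()
    window_start = now - 3600
    recent = [t for t in _query_log.get(client_id, []) if t >= window_start]
    _query_log[client_id] = recent
    return len(recent) < QUERY_BUDGET_PER_HOUR


def guarded_predict(model_predict, features, client_id):
    """Serve a prediction while limiting what an attacker can learn.

    model_predict is assumed to return (label, confidence); only the label
    is exposed, and clients exceeding the hourly budget are refused.
    """
    if not _within_budget(client_id):
        raise PermissionError("query budget exceeded; request refused")
    _query_log.setdefault(client_id, []).append(time.time())
    label, _confidence = model_predict(features)
    return label  # withhold confidence scores to slow extraction attempts


if __name__ == "__main__":
    # Stand-in model used purely for demonstration.
    fake_model = lambda x: ("normal_rhythm", 0.93)
    print(guarded_predict(fake_model, [0.1, 0.2], client_id="workstation_7"))
```

Withholding confidence scores and throttling queries does not make model extraction impossible, but it raises the attacker's cost considerably.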
Now, let's explore how these clinical AI threats map to the CIA (confidentiality, integrity and availability) triad of cybersecurity.

Confidentiality

Confidentiality matters most for intellectual property, Personal Health Information (PHI) and Personally Identifiable Information (PII). If data confidentiality is not maintained, the impact of data leakage and exfiltration will be immense. Data poisoning, in contrast, is an integrity issue. Examined closely, most of the threats above are data integrity problems at their core, but the impact is amplified because ML models consume huge datasets. That means a larger attack surface to secure and a more complicated architecture. Securing such an end-to-end architecture requires a huge effort because of its complexity; there are many places where things can go wrong.
Integrity
Maintaining end-to-end data integrity requires a secure data supply chain and periodic validation of integrity during the different development phases. Breaches in integrity can also pose significant challenges to privacy and intellectual property.
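One concrete way to implement such periodic validation is to keep a manifest of dataset hashes, recorded when the data is approved and re-verified at each development phase. The sketch below uses plain SHA-256 file hashing; the file layout and manifest format are illustrative assumptions.

```python
import hashlib
import json
from pathlib import Path


def sha256_of_file(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file in a streaming fashion."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def build_manifest(data_dir, manifest_path):
    """Record a hash for every dataset file (run when the data is first approved)."""
    manifest = {p.name: sha256_of_file(p) for p in Path(data_dir).glob("*.csv")}
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))


def verify_manifest(data_dir, manifest_path):
    """Return the files whose contents no longer match the recorded hashes."""
    manifest = json.loads(Path(manifest_path).read_text())
    return [
        name for name, expected in manifest.items()
        if sha256_of_file(Path(data_dir) / name) != expected
    ]


if __name__ == "__main__":
    # Example usage during a validation phase (paths are illustrative):
    # build_manifest("training_data/", "training_data.manifest.json")
    # tampered = verify_manifest("training_data/", "training_data.manifest.json")
    # if tampered: raise RuntimeError(f"integrity check failed for: {tampered}")
    pass
```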
Availability

The size and scale of ML/AI models present a unique availability challenge for clinical applications and medical devices. Distributed Denial of Service (DDoS) attacks can directly impact patient safety. The Recovery Time Objective (RTO) will also be longer, which is critical for clinical systems and medical devices that affect patient safety. Cybersecurity has always been a challenge, and it will continue to be one because of the complexity of interconnected computer systems. Securing AI applications in clinical systems and medical devices therefore requires extra guardrails around the technical, legal and safety aspects of these devices.
(The author is a medical cybersecurity engineer at Becton Dickinson, a US-based multinational medical technology company. ravi.dhungel@gmail.com)