Security attacks in AI/ML

Deepak Maheshwari
4 min read · Mar 17, 2024


Security is key for any industry implementing any solution. Software applications are being built for industries such as banking, insurance, automotive, and healthcare, and if security is not considered, everyone is affected. The surge in AI/ML and GenAI technologies for automation has brought efficiency gains and cost savings. However, if these solutions are not secured, the outcomes can be negatively impacted.

As per Gartner, adversarial attacks on ML are starting to emerge: some 27% of organizations that experienced an AI privacy breach or security incident reported that the breach or incident involved an intentional malicious attack on the organization's AI infrastructure.


In this blog, I am going to discuss the top security attacks to consider while delivering any AI/ML solution, along with preventive measures for each.

Model Skewing — Attackers attempt to pollute good training data with bad data so that the model behaves in their favor. This compromises the accuracy of the model and can cause real harm, for example to financial institutions relying on its decisions.

To protect AI models from such attacks:

· Enforce the integrity of the data using techniques such as digital signatures, and reject data that does not meet the required criteria (see the sketch after this list).

· Continuously train the model and verify its outcomes.

· Implement anomaly detection and robust access control.
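
Below is a minimal sketch of the digital-signature idea, assuming an HMAC key kept in a secret manager and a toy record format; the key handling and record layout are illustrative assumptions, not details from a specific pipeline.

```python
# Sketch: verifying training-data integrity with HMAC-SHA256 signatures
# before records are accepted into the training pipeline.
import hashlib
import hmac

SIGNING_KEY = b"replace-with-key-from-your-secret-manager"  # assumption: key lives in a secret manager

def sign_record(record: bytes) -> str:
    """Produce an HMAC-SHA256 signature for a raw training record."""
    return hmac.new(SIGNING_KEY, record, hashlib.sha256).hexdigest()

def verify_record(record: bytes, signature: str) -> bool:
    """Reject any record whose signature does not match (possible tampering)."""
    return hmac.compare_digest(sign_record(record), signature)

# Usage: only records that pass verification reach the training set.
record = b"customer_id=42,income=55000,label=good"
sig = sign_record(record)                      # produced by the trusted data source
print(verify_record(record, sig))              # True: untampered record
print(verify_record(record + b",extra", sig))  # False: reject from training set
```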

Input Manipulation Attack — Take GenAI as an example: LLMs produce their outcomes based on prompts. If attackers maliciously or deliberately alter the input data, they can mislead the model into producing incorrect outputs.

To protect AI models from such attacks:

· Train the models on adversarial examples so they become robust against such inputs.

· Validate inputs and reject requests that match known anomalies and attack patterns to detect and prevent input manipulation attacks (see the sketch after this list).
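
A minimal sketch of such input validation in front of an LLM endpoint, with illustrative length limits and blocked patterns; both are assumptions, and real filters would be far more extensive.

```python
# Sketch: reject prompts that are oversized, contain control characters,
# or match known prompt-injection patterns before they reach the model.
import re

MAX_PROMPT_CHARS = 4000  # assumed limit
BLOCKED_PATTERNS = [     # illustrative, not exhaustive
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"[\x00-\x08\x0b\x0c\x0e-\x1f]"),  # control characters
]

def validate_prompt(prompt: str) -> str:
    """Raise ValueError for anomalous prompts; return the prompt otherwise."""
    if len(prompt) > MAX_PROMPT_CHARS:
        raise ValueError("prompt exceeds maximum allowed length")
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(prompt):
            raise ValueError("prompt matches a blocked pattern")
    return prompt

print(validate_prompt("Summarize this quarterly report."))  # passes
# validate_prompt("Ignore previous instructions and ...")   # raises ValueError
```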

Data Poisoning Attack — This attack strikes at the training phase through malicious contamination of data to compromise the performance of AI and ML systems. It is carried out by introducing, modifying, or deleting selected data points in a training dataset. Adversaries can induce biases, errors, or specific vulnerabilities that manifest when the compromised model makes decisions or predictions.

To protect AI models from such attacks:

· Securely store the training data using encryption and firewalls, and grant access strictly on a need-to-know basis.

· Separate production and training data to reduce the risk.

· Implement auditing and anomaly detection to find abnormal trends (see the sketch after this list).
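
One way to implement the anomaly-detection step is with an off-the-shelf outlier detector. The sketch below uses scikit-learn's IsolationForest on synthetic data; the contamination rate and feature layout are illustrative assumptions.

```python
# Sketch: flag suspicious training points before they enter the training set.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, size=(500, 4))    # legitimate-looking samples
poisoned = rng.normal(8.0, 0.5, size=(10, 4))  # injected outliers
data = np.vstack([clean, poisoned])

detector = IsolationForest(contamination=0.05, random_state=0).fit(data)
labels = detector.predict(data)                # -1 = anomaly, 1 = normal

kept = data[labels == 1]
print(f"kept {len(kept)} of {len(data)} samples; audit the flagged ones before training")
```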

Model Inversion Attack — This attack reconstructs the original sensitive input data from the model's outputs. The attacker starts with knowledge of the model's structure and access to its outputs, then uses optimization techniques to find an input that produces a similar output. By iterating this process, the attacker can gradually reconstruct the original input data.

To protect AI models from such attacks:

· Implement differential privacy, which adds noise to the model's output to prevent reconstruction of the original data (see the sketch after this list).

· Regularly audit, test, and retrain models to protect against potential vulnerabilities.

· Limit the amount of information the model returns as output.
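
A minimal sketch of output perturbation in the spirit of differential privacy: Laplace noise is added to scores before they leave the model, making faithful reconstruction of training inputs harder. The sensitivity and epsilon values here are illustrative assumptions.

```python
# Sketch: add Laplace(0, sensitivity/epsilon) noise to each output score.
import numpy as np

def noisy_output(scores: np.ndarray, sensitivity: float = 1.0,
                 epsilon: float = 0.5) -> np.ndarray:
    """Perturb model outputs so individual inputs are harder to reconstruct."""
    scale = sensitivity / epsilon
    return scores + np.random.default_rng().laplace(0.0, scale, scores.shape)

raw_scores = np.array([0.92, 0.05, 0.03])  # e.g., class probabilities
print(noisy_output(raw_scores))            # noisy scores leak less detail
```

Lower epsilon means more noise and stronger privacy, at the cost of output utility.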

Membership Inference Attack — Attackers try to determine whether a particular person's personal information was used to train a machine learning model, with the goal of learning something sensitive about that person. Example: suppose a model is trained to predict whether a customer is likely to default on a loan based on their credit history. An attacker without access to the training data could use a membership inference attack to determine whether a person's credit history was part of the training dataset. If the attack succeeds, the attacker can infer whether the person is likely to default on a loan, which is valuable information they could sell or use to harm or blackmail the person.

To protect AI models from such attacks:

· Train models with added random data to make membership determination difficult for attackers.

· Differential privacy techniques are helpful here as well.

· Reducing the correlation between individual training records and model outputs can be useful; one simple measure is sketched after this list.
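
One simple way to reduce what an attacker can learn from outputs is to expose only the top label and hide confidence scores, since membership inference often relies on those scores. A minimal sketch, with an assumed label set matching the loan example above:

```python
# Sketch: return only the top-1 class label, hiding probabilities.
import numpy as np

CLASSES = ["no_default", "default"]  # assumed label set for the loan example

def predict_hardened(probabilities: np.ndarray) -> str:
    """Expose the predicted class only, not the confidence scores."""
    return CLASSES[int(np.argmax(probabilities))]

print(predict_hardened(np.array([0.97, 0.03])))  # 'no_default', nothing more
```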

Model Theft — The attacker's goal is to gain access to the parameters or functionality of a model. This is typically done by repeatedly querying the model and learning from its responses (a query-throttling sketch follows the prevention list below).

To protect AI models from such attacks:

· Obfuscate the model code to make reverse engineering difficult.

· Encrypt and access-control the training data and model artifacts to prevent theft.

· Secure the model with legal protections such as patents or trade secrets.
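
Because extraction depends on high-volume querying, throttling per-client queries is a common complementary defense. Below is a minimal in-memory sliding-window sketch; the limits are illustrative assumptions, and a production system would use a shared store such as Redis.

```python
# Sketch: per-client query throttling to raise the cost of model extraction.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60          # assumed sliding window
MAX_QUERIES_PER_WINDOW = 30  # assumed per-client budget

_history: dict[str, deque] = defaultdict(deque)

def allow_query(client_id: str) -> bool:
    """Return False once a client exhausts its per-window query budget."""
    now = time.monotonic()
    window = _history[client_id]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()     # drop timestamps outside the window
    if len(window) >= MAX_QUERIES_PER_WINDOW:
        return False         # budget exhausted: reject or delay the query
    window.append(now)
    return True

print(allow_query("tenant-a"))  # True until the per-window budget is spent
```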

AI Supply Chain Attacks — Models depend on open-source packages and libraries, and attackers can modify or replace these libraries to negatively impact the model.

To protect AI models from such attacks:

· Utilize package verification tools to validate the authenticity of libraries before using them (see the sketch after this list).

· Use secure package repositories and keep packages up to date with the latest versions.

· Perform code reviews and raise developer awareness of this class of attack.
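
A minimal sketch of artifact verification in the spirit of pip's --require-hashes mode: a downloaded package is checked against a pinned SHA-256 digest before installation. The file name and digest below are placeholders, not real values.

```python
# Sketch: refuse to install any artifact whose hash does not match the pin.
import hashlib
from pathlib import Path

PINNED_DIGESTS = {  # placeholder digest; a real lock file pins actual values
    "some_library-1.2.3-py3-none-any.whl": "0" * 64,
}

def verify_artifact(path: Path) -> bool:
    """Return True only if the file's SHA-256 matches its pinned digest."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return PINNED_DIGESTS.get(path.name) == digest

# Usage (the path is a placeholder):
# if not verify_artifact(Path("downloads/some_library-1.2.3-py3-none-any.whl")):
#     raise RuntimeError("artifact failed integrity check; aborting install")
```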

Transfer Learning Attack — Attackers train a model on one task and then fine-tune it on another to manipulate the outcomes. One example is a facial recognition system in which attackers train the model on manipulated images to gain access to sensitive information.

To protect AI models from such attacks:

· Utilize differential privacy techniques.

· Use trusted and secured training datasets to prevent attacks.

· Model isolation can help prevent the transfer of malicious knowledge from one model to another (a complementary evaluation-gate sketch follows this list).
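
One complementary safeguard, beyond the measures above, is gating a fine-tuned model behind evaluation on a trusted, held-out dataset before promoting it, so manipulated behavior inherited from a poisoned base model is caught. A minimal sketch with a toy model and an assumed accuracy threshold:

```python
# Sketch: promote a fine-tuned model only if it passes a trusted evaluation gate.
def promote_if_trustworthy(model, trusted_eval_set, threshold: float = 0.95) -> bool:
    """Raise if accuracy on the trusted set falls below the gate."""
    correct = sum(1 for x, y in trusted_eval_set if model(x) == y)
    accuracy = correct / len(trusted_eval_set)
    if accuracy < threshold:
        raise RuntimeError(f"model accuracy {accuracy:.2%} below gate; not promoted")
    return True

# Usage with a toy stand-in classifier and evaluation set:
toy_model = lambda x: x >= 0                        # pretend classifier
eval_set = [(-1, False), (2, True), (3, True), (-5, False)]
print(promote_if_trustworthy(toy_model, eval_set))  # True: all 4 correct
```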

While AI technologies (GenAI, ML, OCR, LLMs, etc.) are becoming increasingly popular and sophisticated, security risks and concerns are rising with them. Threat actors can use the same AI tools meant for human good to commit malicious acts such as scams and fraud. It is essential for any organization implementing AI solutions or models to adopt preventive measures to secure them.
