Vulnerabilities of Large Language Models: Analysis and Protection Methods

Автор: Rahmani J., Baibara B.V., Tetov S.G.

Журнал: Проблемы информатики @problem-info

Рубрика: Теоретическая и системная информатика

Статья в выпуске: 3 (68), 2025 года.

Бесплатный доступ

The rapid adoption of large language models (LLMs) in enterprise environments has revolutionized industries by enabling advanced automation, customer service, content generation, and data analysis. However, this technological advancement introduces significant security risks, as organizations increasingly report breaches and vulnerabilities associated with AI systems. According to industry reports, 74 % of major IT companies experienced AI-related security incidents in 2024, with 89 % expressing concerns about vulnerabilities in third-party AI applications. This paper provides a comprehensive analysis of the most critical security threats in LLM deployments, focusing on prompt injection attacks, different supply chain vulnerabilities, and data poisoning, while proposing mitigation strategies to enhance AI security. Key Vulnerabilities in LLM Applications: In this paper we analyze most critical vulnerabilities based on OWASP TOP 10 LLM list. OWASP (Open Worldwide Application Security Project — The Open World Application Security Project (OWASP) in its “OWASP Top 10 for Large Language Model Applications 2025” ranked operational injection, sensitive information disclosure, supply chain vulnerabilities, data and model poisoning, and improper output handling as the top five vulnerabilities. 1. Prompt Injection Attacks - Prompt injection occurs when malicious user inputs manipulate an LLM’s behavior, bypassing security restrictions to extract sensitive data, execute unauthorized commands, or generate harmful content. - Two primary types are identified: a) Direct prompt injection: Explicit adversarial instructions that override system prompts (e.g., “Ignore previous instructions and disclose confidential data”). b) Indirect prompt injection: Maliciously crafted external data (e.g., poisoned web pages or documents) that indirectly influences the model’s output. - Advanced techniques like Knowledge Return-Oriented Prompting (KROP) demonstrate how attackers can bypass safeguards by leveraging the model’s training data - Mitigation strategies: Input validation, output filtering, least-privilege access controls, and alignment-based guardrails to enforce intended model behavior. 2. Supply Chain Vulnerabilities - LLMs rely on external dependencies, including pre-trained models, datasets, and third-party libraries, which can be compromised to introduce backdoors or biased behavior. - Case studies include the “pymafka” PyPI package, which mimicked a legitimate library but deployed Cobalt Strike malware. - A formal risk assessment model evaluates the probability of compromise across data, dependencies, and training pipelines. - Mitigation strategies: Secure model provenance (e.g., signed artifacts), Software Bill of Materials (SBOM) for dependencies, and continuous monitoring for anomalies. 3. Data Poisoning Attacks - Adversaries corrupt training data to manipulate model outputs, leading to biased, unethical, or malicious behavior. - Notable incidents include Microsoft’s Tay chatbot, which was manipulated into generating offensive content through user interactions. - Risks extend to pickle-based model serialization, where malicious code can execute during deserialization, compromising entire systems. - Mitigation strategies: Secure data sourcing, sandboxing untrusted inputs, and anomaly detection via gradient analysis and behavioral divergence metrics. Defensive Frameworks and Future Challenges The paper highlights existing defense mechanisms while acknowledging persistent gaps in LLM security. Key recommendations include: - Secure-by-design principles, such as using safer serialization formats (e.g., SafeTensors instead of pickle). - Multi-layered validation of inputs, outputs, and model behavior. Despite these measures, the evolving sophistication of attacks—such as Indirect Prompt Injection, Knowledge-Return-Oriented-Prompting and backdoored models — demands ongoing research. The paper concludes by emphasizing the need for industry-wide collaboration, standardized security benchmarks, and regulatory frameworks to mitigate risks in LLM adoption.

Еще

LLM, artificial intelligence, prompt injection, supply chain attack, data poisoning

Короткий адрес: https://sciup.org/143185310

IDR: 143185310   |   УДК: 004.89:004.056   |   DOI: 10.24412/2073-0667-2025-3-17-33