LLMs and New Cyberthreats

Cybersecurity
5min read
Author

Igor Sadoune

Published

June 25, 2025

Large Language Models (LLMs) are making their way into organizations where they serve as engines fine tuned on private documentation and data to power agentic workers and therefore increase productivity. LLMs support production but also can considerably weight in decision-making as they are often (wrongly) used as oracles by staff. More recently, as discussed in a previous article of this blog , AI systems have been integrated into governments internal worflow, data management, and decision-making, exapanding LLMs control beyond the corporate world. This allows to spread the strength of AI over several layers of the society. However, if AI has strentgh–mainly productivity boost if used properly–it has also vulnerabilities. Indeed, on top of the usual limitations, fears, and concerns, new cybersecurity threats arise. Cybersecurity threats are not inherently new, but using LLMs as central engine creates new opportunities for attackers, such as prompt manipulation, data poisoning and model extraction.

Conventional versus LLM-Specific Cybersecurity

The emergence of LLMs introduces unique security challenges that differ fundamentally from traditional cybersecurity concerns. Those LLM-specific risks and associated counter measures do not compete against the existing framework but compements it:

Dimension Traditional Cybersecurity LLM-Specific Cybersecurity
Attack Surface System intrusion required Remote manipulation possible
Data Type Structured (databases, files) Unstructured (text, conversations)
Primary Risks Malware, unauthorized access Prompt injection, data poisoning
System Nature Static, predictable Dynamic, context-dependent
Compliance ISO 27001, GDPR AI Act, ethical guidelines
Defense Firewalls, encryption Input sanitization, output filtering
Human Factor Password weakness, phishing Overreliance, prompt crafting

LLM-Specific Threats

Attack Vectors

The cybersecurity landscape transforms dramatically when LLMs enter the picture. While traditional attacks like malware and phishing require attackers to breach network perimeters or exploit human vulnerabilities, LLM-specific threats operate through entirely different channels.

Consider data poisoning, where attackers don’t need to hack into your systems at all. Instead, they corrupt the training data that shapes the model’s behavior, creating a time bomb that activates when the model encounters specific triggers. Similarly, model extraction attacks use carefully crafted queries to reverse-engineer proprietary models, essentially stealing intellectual property through the front door. Prompt injection represents perhaps the most accessible attack vector—malicious users can manipulate outputs simply by crafting clever inputs, turning the model’s conversational nature against itself.

Data Protection

The nature of data protection also shifts fundamentally. Traditional cybersecurity focuses on securing structured data—neat rows in databases, organized financial records, and clearly defined files. But LLMs work with messy, unstructured conversations and text, creating new vulnerabilities. These models can inadvertently memorize and expose sensitive training data, leaking confidential information through seemingly innocent responses. The challenge isn’t just protecting data at rest or in transit, but controlling what the model has learned and might reveal.

Human-Centric Risks

Beyond data concerns, LLMs introduce risks that don’t exist in traditional systems. Bias and fairness become security issues when models produce discriminatory outputs that could lead to legal liability or reputational damage. Hallucinations—the model’s tendency to generate convincing but false information—pose risks ranging from misinformation spread to flawed business decisions. These aren’t bugs in the traditional sense; they’re inherent characteristics of how these models work.

The dynamic nature of LLMs creates additional complexity. Unlike static servers with fixed configurations, LLMs adapt their responses based on context and user interaction. Each conversation potentially changes how the model behaves, making it harder to predict and control outputs. This interactivity, while powerful for productivity, opens doors for sophisticated manipulation techniques.

Regulatory Compliance

Regulatory compliance adds another layer of challenge. While traditional systems can rely on established frameworks like ISO 27001 or GDPR, LLMs must navigate emerging AI-specific regulations like the EU AI Act. These new rules demand transparency, accountability, and fairness—concepts that are straightforward for traditional systems but complex for probabilistic models that even their creators don’t fully understand.

Defending Against LLM-Specific Threats

Defending against these threats requires rethinking security strategies. Traditional tools like firewalls and encryption remain important but insufficient. Organizations must implement input sanitization to filter malicious prompts before they reach the model, output monitoring to catch harmful or sensitive responses before users see them, and continuous model fine-tuning to patch vulnerabilities as they’re discovered.

Perhaps most critically, the human element takes on new dimensions. While traditional cybersecurity worries about weak passwords and phishing victims, LLM security must address how users interact with AI. This includes preventing intentional misuse by malicious actors and protecting well-meaning users from overrelying on AI outputs. When employees treat LLMs as infallible oracles rather than sophisticated but fallible tools, they create vulnerabilities that no technical solution can fully address.

To Cite This Article

@misc{SadouneBlog202509,
  title = {LLMs and New Cyberthreats},
  author = {Igor Sadoune},
  year = {2025},
  url = {https://sadoune.me/posts/llm_cybersecurity/}
}