JSL’s security team applies a comprehensive view of IT security, integrating assessment, audit, and compliance.
The Large Language Model (LLM) you might soon have connected to your inbox could be introducing more risk than reward into your daily email scroll. In the “good old days” (and by that I mean literally yesterday), attackers exploited websites with sneaky tricks like SQL injection, cross-site scripting (XSS), and cross-site request forgery (CSRF). The formula was simple: take some untrusted input, sneak in malicious instructions, and—voilà—the website or browser would happily run the attacker’s code as if it were your own. Click the wrong link and suddenly your banking login or sensitive data was up for grabs.
The thing is, those attacks never really went away. And now, with the rapid spread of AI, we’re watching history repeat itself—but this time, the target isn’t your browser. It’s the shiny new LLM quietly connected to your email, calendar, and documents.
Large language models (LLMs) are being embedded into everyday services — including email — and that creates a new class of attacks that looks strikingly similar to old web injection and session-abuse techniques. In classic SQL injection and cross-site scripting (XSS), attackers hide executable instructions inside otherwise-innocent inputs so a privileged component (a database or a user’s browser) runs them. In cross-site request forgery (CSRF), attackers leverage an already-authenticated session to perform actions the user never intended. Modern prompt-injection and agent attacks do the same thing: specially crafted emails or documents contain natural-language instructions that an LLM or an automated agent treats as commands. Because those agents often have privileged connections (email, contacts, cloud drives, API tokens), they can exfiltrate data or perform actions without the human ever reading or consenting. Recent research and reporting demonstrate proof-of-concept exfiltration and agent attacks, underscoring why organizations should treat LLM connectors like any other sensitive integration.
Sidebar: Enterprise Controls for LLM-Connected Email
(A quick-hit checklist for security & risk teams)
Final Thoughts: Déjà Vu, But With AI
If all this sounds familiar, it should. We’ve been here before. SQL injection, XSS, and CSRF taught us that whenever software blindly trusts input, bad things happen. Now, LLMs are walking down the same path—except instead of parsing web forms or browser cookies, they’re parsing our emails, calendars, and business data. And unlike the early web, these tools don’t just display information—they can act on it. That makes them powerful, but also dangerous.
Organizations can’t afford to treat this as a “future problem.” The attacks are already here, and they’re only going to get more convincing. The decision to connect an LLM to email or documents should always come with a risk assessment, guardrails, and monitoring in place.
Because at the end of the day, this isn’t just about stopping hackers—it’s about making sure the AI sitting in your inbox works for you, not against you.
Sources: