• Home
  • Services
    • Cybersecurity
      • Cybersecurity Awareness Month
        • Cybersecurity Q&A with CISO Avery Moore
        • The cybersecurity work no one talks about but everyone depends on
    • ICAM
    • System & Application Development
    • IT Support Services
    • Low-Code Development
      • Grants Management Solutions
    • Consulting & Advisory Services
  • About
    • About JSL
    • Contract Vehicles
    • Resources
    • JSL Companies
    • JSL Defense
    • Giving Back
  • Case Studies
    • Case Study: Securing Millions of Accounts with MFA
    • Case Study: Modernizing Labor’s Job Corps System
  • Clients
  • News
    • Press Releases
    • Cybersecurity Blog
    • Cybersecurity Awareness Month
  • Careers
  • Contact Us

Jazz Solutions, Inc. (JSL)  

  • solutions@jazzsol.com
Connect with JSL
  • Home
  • Services

      Cybersecurity

      JSL’s security team applies a comprehensive view of IT security, integrating assessment, audit, and compliance.

      Learn More

      ICAM

      JSL provides customers with  reliable, secure solutions across multiple ICAM technologies to protect systems and data.

      Learn More

      System & Application Development

      JSL’s Agile process emphasizes collaboration, hands-on demos of functionality, and usable software with each cycle.

      Learn More

      IT Support Services

      JSL’s IT support services allow our clients to focus on their core competencies, improve operational efficiency, and reduce costs.

      Learn More

      Low-Code Development

      JSL offers low-code, full-custom development, and hybrid solutions, focusing on immediate needs as well as long-term success.

      Learn More

      Consulting & Advisory Services

      JSL helps government agencies improve efficiency, streamline processes, and manage resources.

      Learn More

    • Cybersecurity
      • Cybersecurity Awareness Month
        • Cybersecurity Q&A with CISO Avery Moore
        • The cybersecurity work no one talks about but everyone depends on
    • ICAM
    • System & Application Development
    • IT Support Services
    • Low-Code Development
      • Grants Management Solutions
    • Consulting & Advisory Services
  • About
    • About JSL
    • Contract Vehicles
    • Resources
    • JSL Companies
    • JSL Defense
    • Giving Back
  • Case Studies
    • Case Study: Securing Millions of Accounts with MFA
    • Case Study: Modernizing Labor’s Job Corps System
  • Clients
  • News
    • Press Releases
    • Cybersecurity Blog
    • Cybersecurity Awareness Month
  • Careers
  • Contact Us
Linkedin
  • Home
  • Services

      Cybersecurity

      JSL’s security team applies a comprehensive view of IT security, integrating assessment, audit, and compliance.

      Learn More

      ICAM

      JSL provides customers with  reliable, secure solutions across multiple ICAM technologies to protect systems and data.

      Learn More

      System & Application Development

      JSL’s Agile process emphasizes collaboration, hands-on demos of functionality, and usable software with each cycle.

      Learn More

      IT Support Services

      JSL’s IT support services allow our clients to focus on their core competencies, improve operational efficiency, and reduce costs.

      Learn More

      Low-Code Development

      JSL offers low-code, full-custom development, and hybrid solutions, focusing on immediate needs as well as long-term success.

      Learn More

      Consulting & Advisory Services

      JSL helps government agencies improve efficiency, streamline processes, and manage resources.

      Learn More

    • Cybersecurity
      • Cybersecurity Awareness Month
        • Cybersecurity Q&A with CISO Avery Moore
        • The cybersecurity work no one talks about but everyone depends on
    • ICAM
    • System & Application Development
    • IT Support Services
    • Low-Code Development
      • Grants Management Solutions
    • Consulting & Advisory Services
  • About
    • About JSL
    • Contract Vehicles
    • Resources
    • JSL Companies
    • JSL Defense
    • Giving Back
  • Case Studies
    • Case Study: Securing Millions of Accounts with MFA
    • Case Study: Modernizing Labor’s Job Corps System
  • Clients
  • News
    • Press Releases
    • Cybersecurity Blog
    • Cybersecurity Awareness Month
  • Careers
  • Contact Us
Linkedin
Cyber in 60

Injection attacks reimagined: How LLMs are the new target

By JSL Staff 

The Large Language Model (LLM) you might soon have connected to your inbox could be introducing more risk than reward into your daily email scroll. In the “good old days” (and by that I mean literally yesterday), attackers exploited websites with sneaky tricks like SQL injection, cross-site scripting (XSS), and cross-site request forgery (CSRF). The formula was simple: take some untrusted input, sneak in malicious instructions, and—voilà—the website or browser would happily run the attacker’s code as if it were your own. Click the wrong link and suddenly your banking login or sensitive data was up for grabs. 

The thing is, those attacks never really went away. And now, with the rapid spread of AI, we’re watching history repeat itself—but this time, the target isn’t your browser. It’s the shiny new LLM quietly connected to your email, calendar, and documents. 

Large language models (LLMs) are being embedded into everyday services — including email — and that creates a new class of attacks that looks strikingly similar to old web injection and session-abuse techniques. In classic SQL injection and cross-site scripting (XSS), attackers hide executable instructions inside otherwise-innocent inputs so a privileged component (a database or a user’s browser) runs them. In cross-site request forgery (CSRF), attackers leverage an already-authenticated session to perform actions the user never intended. Modern prompt-injection and agent attacks do the same thing: specially crafted emails or documents contain natural-language instructions that an LLM or an automated agent treats as commands. Because those agents often have privileged connections (email, contacts, cloud drives, API tokens), they can exfiltrate data or perform actions without the human ever reading or consenting. Recent research and reporting demonstrate proof-of-concept exfiltration and agent attacks, underscoring why organizations should treat LLM connectors like any other sensitive integration. 

Sidebar: Enterprise Controls for LLM-Connected Email 

(A quick-hit checklist for security & risk teams) 

  • Least privilege: Limit OAuth scopes and remove “all mail” or “export contacts” access unless absolutely necessary. 
  • No auto-execute: Block agents from automatically carrying out commands embedded in untrusted content. 
  • Human-in-the-loop: Require explicit user confirmation for risky actions (e.g., sending mail, exporting contacts, sharing docs). 
  • Logging & audit: Record all agent-initiated actions, including raw inputs, for at least X days. 
  • Adversarial testing: Continuously fuzz agents with known prompt-injection payloads before production use. 
  • Anomaly detection: Monitor for mass data exports or unusual outbound requests from agent accounts. 

Final Thoughts: Déjà Vu, But With AI 

If all this sounds familiar, it should. We’ve been here before. SQL injection, XSS, and CSRF taught us that whenever software blindly trusts input, bad things happen. Now, LLMs are walking down the same path—except instead of parsing web forms or browser cookies, they’re parsing our emails, calendars, and business data. And unlike the early web, these tools don’t just display information—they can act on it. That makes them powerful, but also dangerous. 

Organizations can’t afford to treat this as a “future problem.” The attacks are already here, and they’re only going to get more convincing. The decision to connect an LLM to email or documents should always come with a risk assessment, guardrails, and monitoring in place. 

Because at the end of the day, this isn’t just about stopping hackers—it’s about making sure the AI sitting in your inbox works for you, not against you. 

Sources:  

  • Ars Technica — “New attack on ChatGPT research agent pilfers secrets from Gmail inboxes” (report of agent-based Gmail exfiltration). Ars Technica 
  • OWASP — XSS and CSRF primer (classic web injection/session-abuse definitions). OWASP+1 
  • OWASP GenAI — “Prompt Injection” entry and mitigations. OWASP Gen AI Security Project 
  • Wired — “Imprompter” research showing prompt techniques that extract personal data. WIRED 
  • arXiv / academic work on agent/LLM attack surfaces (systematic agent exploits). arXiv 
  • CSO Online – “Meet ShadowLeak: ‘Impossible to detect’ data theft using AI” (https://www.csoonline.com/article/4059606/meet-shadowleak-impossible-to-detect-data-theft-using-ai.html) 

How AI is transforming both cybersecurity and cybercrime
Previous Article