AI is revolutionising our world - but insecure LLM integrations are an easy target for hackers. Find out how you can protect yourself effectively.
While organisations are rushing to deploy Large Language Models (LLMs) in their applications, attackers are often even faster. In this live webinar, we will share our experiences from penetration testing our customers' LLM deployments and discuss basic LLM security testing methods.
We will introduce a ground-breaking technique for prompt extraction - including a live demonstration - that turns previous security assumptions about LLMs on their head. You'll see how real attackers can reveal sensitive information based on model output alone - despite established security measures. Based on the latest research, we'll show why current tools can't keep up, how these methods were discovered and what you can do to stay one step ahead of attackers.
Information about the webinar
Agenda
- Welcome & Introduction
- Findings from pentests with LLMs
- Methods & approach to testing
- Latest research results
- Live demo
- Questions & Answers (Q&A)
Target group
- Security leads and security teams
- AI/ML engineers
- AppSec specialists
- DevOps engineers
- CISOs
- Those responsible for product security
- Technical decision makers who develop or protect LLM-based systems
What you can expect in the webinar
Insights:
Security risks we discover in real LLM deployments - straight from Claranet pentesters
Expertise:
How to perform security tests of LLMs compared to conventional penetration tests
Research findings:
What the latest research reveals about vulnerabilities in generative AI
Live demo
How system prompts can be extracted from model outputs - including a live demo of a new attack
➡ Ideal for anyone who wants to make their LLM integrations secure and future-proof.
Speaker
Would you like to delve deeper?
This webinar provides an initial insight into the content of our two-day Mastering LLM Integration Security: Offensive & Defensive Tactics training course. You will learn hands-on how to secure LLM-based systems holistically and how to recognise and defend against attacks independently.
