18 February 2026
Webinar: Hacking LLM Applications - Experiences and insights from our pentests
AI is revolutionising our world - but insecure LLM integrations are an easy target for hackers.
While companies are rushing to deploy Large Language Models (LLMs) in their applications, attackers are often even faster. In this live webinar, we will share our experiences from penetration testing our customers' LLM implementations and discuss basic LLM security testing methods.
We will introduce a ground-breaking technique for prompt extraction - including a live demonstration - that turns previous security assumptions about LLMs on their head. You'll see how real attackers can reveal sensitive information based on model output alone - despite established security measures. Based on the latest research, we'll show why current tools can't keep up, how these methods were discovered and what you can do to stay one step ahead of attackers.
