WEBINAR

Hacking LLM Applications - Experiences & insights from our pentests

18. February 2026 | 10-11 a.m. | Online

Language: English

Register now free of charge

AI is revolutionising our world - but insecure LLM integrations are an easy target for hackers. Find out how you can protect yourself effectively.

While organisations are rushing to deploy Large Language Models (LLMs) in their applications, attackers are often even faster. In this live webinar, we will share our experiences from penetration testing our customers' LLM deployments and discuss basic LLM security testing methods.

We will introduce a ground-breaking technique for prompt extraction - including a live demonstration - that turns previous security assumptions about LLMs on their head. You'll see how real attackers can reveal sensitive information based on model output alone - despite established security measures. Based on the latest research, we'll show why current tools can't keep up, how these methods were discovered and what you can do to stay one step ahead of attackers.

Information about the webinar

Agenda

  • Welcome & Introduction
  • Findings from pentests with LLMs
  • Methods & approach to testing
  • Latest research results
  • Live demo
  • Questions & Answers (Q&A)

Target group

  • Security leads and security teams
  • AI/ML engineers
  • AppSec specialists
  • DevOps engineers
  • CISOs
  • Those responsible for product security
  • Technical decision makers who develop or protect LLM-based systems

What you can expect in the webinar

Insights:

Security risks we discover in real LLM deployments - straight from Claranet pentesters

Expertise:

How to perform security tests of LLMs compared to conventional penetration tests

Research findings:

What the latest research reveals about vulnerabilities in generative AI

Live demo

How system prompts can be extracted from model outputs - including a live demo of a new attack

Ideal for anyone who wants to make their LLM integrations secure and future-proof.

It was a very practical and hands-on experience, full of realistic pentesting exercises, exploring how attackers can exploit AI integrations and how to defend against them effectively.

Private Delegate, Mastering LLM Security - November

The content and the labs were all phenomenal. I would without a doubt, definitely recommend this course to anyone interested in AI/LLM security.

Private Delegate, Mastering LLM Security - November

Speaker

Warren Atkinson, Security Consultant & Trainer

Our expert Warren Atkinson specialises in Windows exploits, reverse engineering and Python-based offensive tools and conducts in-depth research into LLM security. Warren has developed the new "Hacking LLM" training course and runs many of NotSoSecure's cloud, web and infrastructure hacking courses. He regularly trains at international events such as Black Hat and other leading global security events.

Would you like to delve deeper?

This webinar provides an initial insight into the content of our two-day Mastering LLM Integration Security: Offensive & Defensive Tactics training course. You will learn hands-on how to secure LLM-based systems holistically and how to recognise and defend against attacks independently.

Learn more

Kostenlose Anmeldung

Loading...