8 July 2025

Hacking LLM Applications: latest research and insights from our LLM pen testing projects

As organisations race to adopt Large Language Models (LLMs) across a wide range of applications, attackers are racing even faster to exploit them. While the promise of generative AI is enormous, so are the risks—and many of them are still poorly understood.

In this live webinar, we’ll share insights from real-world LLM penetration testing projects and walk through some of the fundamentals of testing these systems. Learn directly from Black Hat trainers and penetration testers about how they assess LLM deployments, what vulnerabilities they commonly discover, and the unique security challenges posed by these rapidly evolving technologies.

Why Attend:

  • Understand key vulnerabilities within real-world LLM deployments
  • Discover how security testing for LLMs differs from traditional pen tests
  • Learn how real-world threat actors exploit generative AI
  • Hear about the latest prompt extraction techniques, based on cutting-edge research
  • Watch a live demonstration of a novel prompt-based LLM attack

What You’ll Learn:

  • What we’re seeing in pen testing LLM applications
  • Approaches to security testing of LLMs vs traditional pen tests
  • What the latest research reveals about generative AI weaknesses
  • How system prompts can be extracted from outputs (Live demo of a new attack technique)

Your host

Warren Atkinson

warren atkinson

Warren is a Security Consultant and Trainer at Claranet for NotSoSecure Trainings.

He specialises in Windows exploits, reverse engineering, and Python-based offensive tooling and has been undertaking in-depth research into LLM security

Warren has built our new Hacking LLM training course, and delivers many of NotSoSecure Training’s Cloud, Web, and Infrastructure Hacking courses, regularly trains at Black Hat and other leading global security events


Register now

16:00 - 17:00

Online webinar