Hacking LLM
Virtual Learning:1,990€ + IVA
Duração:
2 dias
Próxima Data:
9 a 10 Mar 2026
Local:
Online
Descrição
The rapid adoption of AI and, specifically, Large Language Models (LLMs), has opened new frontiers in innovation. And in attack surfaces...As companies rush to harness the power of LLMs in applications ranging from customer service to data analytics, they often overlook the emerging security gaps introduced by prompt injection, data poisoning, insecure plugin designs, and more.
Our course directly tackles these new challenges. Over two immersive days, you’ll not only uncover high-impact vulnerabilities that could already be at work within your systems but also learn how to patch them before they result in breaches or critical data leaks. In addition, we regularly update our modules and labs to incorporate the latest security breakthroughs, proof-of-concept exploits, and real-world incidents.
This focus on cutting-edge threats and solutions means that attendees can return year after year for fresh insights, continually refining their ability to secure AI-driven environments as new vulnerabilities emerge.
*PVP por participante. A realização do curso nas datas apresentadas está sujeita a um quórum mínimo de inscrições.
Destinatários
- Security Professionals
- Back-End / Front-End Developers
- System Architects
- Product Managers
- Anyone directly involved in the integration and application of LLM technologies
-
Área: Cybersecurity
Programa:
Prompt Engineering
- What makes a good prompt
- How to write effective prompts
- Including reference text in prompt
- Few-Shot prompting
- How to give AI time to think
- Using Delimiters for Clarity and Security
Prompt Injection
- Nature of Prompt Injection Vulnerabilities
- Direct vs. Indirect Injection
- Real-World Exploits
- Impact and Consequences
- Defense Strategies
- Client-Side attacks
- Case Study: WannaCry
- LAB ACTIVITIES:
- The Math Professor
- Indirect Prompt Injection
ReACT LLM Agent Prompt Injection
- Understanding ReACT
- Tools Purpose in ReACT
- Tool Abuse in Frameworks
- RAO Chain Exploitation
- Prevention and Mitigation
- LAB ACTIVITIES:
- The Bank of NSS
Overreliance in LLM’s
Insecure Output Handing
- Defining Insecure Output Handling
- Recognizing Vulnerabilities
- Simulated Attacks
- Impact of Weaknesses
- Proactive Measures
- LAB ACTIVITIES:
- Report summarization application
- Network analysis agent
- Stock Bot
Training Data Poisoning
- LAB ACTIVITIES:
- Adversarial Poisoning Attack Lab
- Injecting Factual Information Lab
Supply Chain Vulnerabilities
Sensitive Information Disclosure
- LAB ACTIVITIES:
- Incomplete Filtering lab
- Overfitting / Memorization lab
- Misinterpretation
Insecure Plugin Design
- LAB ACTIVITIES:
- Insecure tool usages
Excessive Agency in LLMBased Systems
- LAB ACTIVITIES:
- Excessive agency with excessive functionality
- Excessive agency with excessive permissions
Pré-requisitos:
- Basic Understanding of AI: A foundational knowledge of AI and LLM principles and applications is essential.
- Familiarity with Programming: Some experience with coding, particularly in languages commonly used in AI development (e.g., Python), will be beneficial, though advanced proficiency is not required.
- Understanding of cybersecurity concepts: A basic understanding of cybersecurity threats and mitigation strategies will be advantageous.
- Laptop: AI labs are served in the cloud, access to python IDE is via Jupiter notebooks, the only hardware requirement is access to the latest version of Chrome or Firefox.