- Jailbreaking: The Researcher's Playground • Tim Van hamme • Attacks
- Unmasking Prompt Injection Attacks • Tim Van hamme • Attacks
- Real-world attacks on LLM applications • Thomas Vissers • Attacks
- Hallucinations: LLMs' major reliability problem • Thomas Vissers • Hallucinations
- LLMs: The Next Frontier in Cybersecurity? • Jo De Brabandere • AI Security