Description
The rapid rise of Artificial Intelligence (AI), driven by advances in Large Language Models (LLMs), has transformed how users interact with these systems. Unlike traditional AI, which operated discreetly in the background, modern AI systems expose a direct conversational interface to users. This shift brings new opportunities, but it also introduces substantial risks, such as prompt injection, misuse for illegal purposes, and the generation of unverified or inaccurate outputs (hallucinations). AI-driven systems face growing security threats, including malicious code execution, data breaches, and vulnerabilities that can lead to false predictions, missed detections, or flawed decisions with severe consequences. This presentation examines these challenges and discusses practical defense mechanisms. It is tailored for AI security professionals, cybersecurity specialists, and anyone interested in the latest threats to advanced AI systems and interfaces.