AI security testing (AI‑ST) uncovers vulnerabilities and exposures in AI‑enabled systems and applications by applying specialized assessments tailored to the unique risks of machine learning and generative AI. It includes offensive techniques such as automated generation and execution of adversarial prompts, as well as AI component scanning across model repositories, libraries, frameworks, and notebooks. AI‑ST also evaluates model behavior under manipulation, edge cases, and failure modes to identify issues like data leakage, bias, or unsafe outputs. By proactively detecting weaknesses before deployment, AI‑ST helps organizations strengthen resilience, reduce security incidents, and maintain trust in AI‑driven products. Typical users include security teams, AI/ML engineers, red‑teamers, DevSecOps practitioners, and risk or compliance groups responsible for safeguarding AI applications.