# HackThisAI

May 2024: The AI security CTF challenge has evolved. HackThisAI is no longer under development. Go to Dreadnode’s Crucible instead.

I have been writing capture-the-flag challenges specifically for Adversarial Machine Learning (AML). AML covers attempts to evade, poison, or steal information from machine learning models. It spans a wide range of activities and is a young, growing field of study.
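The evasion case can be sketched in a few lines. The Fast Gradient Sign Method (FGSM) perturbs an input along the sign of the loss gradient until the model misclassifies it. The toy logistic-regression weights below are illustrative only, not drawn from any real system or challenge:

```python
# Minimal evasion-attack sketch (FGSM) against a toy logistic-regression
# "model". Weights and inputs are made up for illustration.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, w, b, y, epsilon):
    """Perturb x along the sign of the loss gradient to raise the loss.

    For binary cross-entropy over sigmoid(w.x + b), the gradient with
    respect to the input x is (sigmoid(w.x + b) - y) * w.
    """
    grad = (sigmoid(np.dot(w, x) + b) - y) * w
    return x + epsilon * np.sign(grad)

# Toy model and a benign input the model places in class 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])

clean_score = sigmoid(np.dot(w, x) + b)          # > 0.5 -> class 1
x_adv = fgsm_attack(x, w, b, y=1.0, epsilon=1.0)
adv_score = sigmoid(np.dot(w, x_adv) + b)        # < 0.5 -> flipped to class 0
```

A small perturbation bounded by `epsilon` is enough to flip the decision, which is the core intuition behind the evasion challenges.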

Years ago, I took an adversarial machine learning workshop. It was really interesting, but there weren’t many ways to continue practicing afterwards. I like the progressive system of CTFs for teaching technical skills and decided to try to adapt AML problems to that format.

I’ve started to prototype some easier challenges here. I’d love some collaborators. Eventually, I’ll pony up for some cloud hosting and front the challenges with CTFd or PicoCTF. I’ve also submitted it to the “Call for Challenges” for DEF CON 30.

Training and educating people about potential vulnerabilities in ML/AI systems will help us build more robust systems. We increasingly see production ML/AI systems having unintended impacts on society. Cultivating a hacker mindset among ML/AI engineers will help us understand the limitations of these systems and build controls for them.