AI Security Interviews
Many organizations are building internal AI security capabilities to evaluate the AI products and services they develop and integrate. Here are my tips for candidates and interviewers navigating this specialty.
For Candidates
Prompt engineering may have been the gateway, but don’t let it be the terminal node. LLMs and their interesting security characteristics brought many new people into the field of AI security; that’s great. While you’re progressing in your bot-bullying skillset, take the opportunity to realize that the field had a maturing set of taxonomies, frameworks, and tools before the rise of LLMs. You’ll go faster and dive deeper if you start to dig into these, and familiarity with more conventional methods may help cover any blind spots you have going into interviews.
Build and deploy AI applications. They don’t need to be good, but try to find different machine learning tasks and modalities and impose different deployment requirements. Deploy a RAG chat app. Add streaming support. Deploy an image classifier. Add support for different formats and dimensions. Performance can be subpar, but at least think about why and how you might improve it. These exercises give important context for understanding the multitude of potential ML products and services you’ll evaluate in your role. BONUS: Now that you know how to build an application, build an intentionally vulnerable application to sharpen your understanding of security boundaries and vulnerabilities. If you’re looking for a more defensive role, you can build defensive controls and detections into your application.
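To make the RAG exercise concrete, here’s a minimal retrieval-only sketch; the model name, corpus, and query are placeholder assumptions, and the generation step would call whatever LLM you have access to.

```python
# Retrieval half of a toy RAG chat app (generation not shown).
# Assumes the sentence-transformers package; model and corpus are placeholders.
import numpy as np
from sentence_transformers import SentenceTransformer

corpus = [
    "API keys rotate every 90 days.",
    "Model inference runs on a dedicated GPU pool.",
    "User uploads are scanned before indexing.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = model.encode(corpus, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k passages most similar to the query (cosine similarity)."""
    q = model.encode([query], normalize_embeddings=True)
    scores = (doc_vectors @ q.T).squeeze()
    return [corpus[i] for i in np.argsort(scores)[::-1][:k]]

context = retrieve("How often do credentials rotate?")
prompt = "Answer using only this context:\n" + "\n".join(context)
# ...send `prompt` to your LLM of choice and stream the response back to the client.
```

Once something like this runs end to end, questions about streaming, input validation, and what an attacker could plant in the corpus become much easier to reason about.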
Hack stuff. This advice admittedly comes from the more offensive side of the house, but there’s a growing number of ways to learn and gain experience even if you don’t already work in an AI security role, e.g. CTFs or bug bounties. And if you have a security background but not much demonstrable ML background, go compete in some Kaggle competitions. Each of these experiences not only teaches you something, it also gives you anecdotes that will be useful for “tell me about a time when…” interview questions.
For Interviewers
Clearly understand the scope along several dimensions.
Evaluation Domains. There are many domains along which we should evaluate any AI system: correctness, latency, maintainability, fairness, explainability, safety, security, and others. You will be best served by clearly defining roles and responsibilities around these domains and precisely scoping job reqs for prospective candidates. This optimizes your applicant pool and helps clarify internal priorities. Many organizations are not thinking granularly and precisely about differences in these domains, but a successful fairness candidate may have very different skills from a successful security candidate.
Organizational Maturity. Assess your organization’s maturity, both for holistic security and AI security specifically. This will help you identify both the right seniority (does this person need to build the team, write policy from scratch, etc.) and specialty (digital forensics and incident response, security automation, architecture, offensive security, etc.).
Be willing to grow your ideal candidate. There aren’t a ton of candidates out there with many years of experience. Be willing, and have a plan, to upskill and mentor folks to build the capability you need. This should also involve some introspection into your organization. Do you have great ML talent but lack security depth? Then you can accept some risk on the candidate’s ML background. Are you a security org without much AI depth? Then hire for ML chops and teach the security aspect on the job. This introspection should shape your interview strategy. Part of your mentorship plan can also include planning and budgeting for external engagements. There are great industry AI red teams at places like Microsoft and NVIDIA, and networking with those folks at industry conferences and events can be a great way to ensure your new AI security hire has mentors and context.
Thinking, not tools. There’s a much-needed boom of tools for AI security tasks, but the field is expanding so rapidly that these tools rarely tailor-fit your requirements. Interview for flexibility and the ability to think through both AI and security challenges from first principles. Your interviews (and candidates’ responses) should be focused on optimization techniques and integrity controls, not specific toolkits.
ML DS&A. If you need to evaluate a candidate’s technical depth, throw away conventional data structures and algorithms questions; there are plenty of more relevant questions that will tell you far more about a candidate. I was once asked “Tell me about PyTorch dataloaders.” It was a great question: we got to talk about tensor dimensions, batching, and shuffling. You could even pivot this question into security contexts around data poisoning and other training-time attacks. The same interviewer asked me “What is an n-gram?” which also has plenty of nice rabbit holes to dive into.
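For a sense of the territory those two questions open up, here’s a toy sketch: a made-up dataset fed through a DataLoader (the batch size, shapes, and labels are placeholder assumptions), plus a bare-bones n-gram helper.

```python
# Toy dataset + DataLoader: the follow-ups write themselves (tensor shapes,
# shuffling, what happens if an attacker controls some of these samples, ...).
import torch
from torch.utils.data import Dataset, DataLoader

class ToyImages(Dataset):
    """Fake 3x32x32 'images' with random integer labels (placeholder data)."""
    def __init__(self, n: int = 64):
        self.x = torch.randn(n, 3, 32, 32)
        self.y = torch.randint(0, 10, (n,))

    def __len__(self):
        return len(self.y)

    def __getitem__(self, idx):
        return self.x[idx], self.y[idx]

loader = DataLoader(ToyImages(), batch_size=16, shuffle=True)
images, labels = next(iter(loader))
print(images.shape, labels.shape)  # torch.Size([16, 3, 32, 32]) torch.Size([16])

# And the n-gram question, in a couple of lines:
def ngrams(tokens: list[str], n: int = 2) -> list[tuple[str, ...]]:
    """Contiguous n-grams from a token list (bigrams by default)."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

print(ngrams("the model was fine tuned".split()))
# [('the', 'model'), ('model', 'was'), ('was', 'fine'), ('fine', 'tuned')]
```

Either snippet gives a candidate plenty of hooks to show depth, from collation and worker processes to how poisoned samples or tokenization quirks would surface in practice.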