# Thoughts: CISA JCDC.AI Tabletop Exercise

## Background
Here’s some background on the tabletop.
## Thoughts
This exercise was a valuable way to explore AI security risks and incident-response concerns. CISA did a great job running it, and we should do more like it.
Rather than simply asking "how can we enable public-private collaboration?", we need to dig deeper and identify the real challenges at hand. "Information sharing" is too broad a problem: there are many situations, types of information, and possible relationships between sharer and recipient. What specific problems does each entity face that more open information sharing could solve? Identifying the critical data or insights a party lacks, for example, can reveal the incentives it needs to engage more fully. By mapping these barriers from the ground up, we may discover that the issue isn't the absence of information channels, but that the perceived risks of sharing currently outweigh the perceived benefits.
While it's easy to paint in broad strokes like "public" and "private" organizations, in reality there are many shades of both, with diverse access to actionable data, risk profiles, and priorities. The DoJ may be viewed differently from DoD, for example: vastly different missions, authorities, and data. Cloud service providers, AI SaaS companies, and security product vendors are likewise very different, with distinct supply chains, missions, relationships, and data. A broadly scoped collaborative group could end up containing not just industry partners but also customers, competitors, and suppliers. In fact, the incentive for collaborative membership is not uniform even internally among public or private participants, let alone across that divide. Even public sector agencies sometimes have trouble sharing relevant data with each other, before any public/private divide enters the picture. There is a lot of relevant game theory for information sharing in that sort of environment, and we may need much more granular information-sharing and sanitization protocols than simple membership in one collaborative (though admittedly you have to start somewhere).
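The game-theoretic intuition can be made concrete with a toy two-player "share vs. withhold" game. The payoff values below are entirely hypothetical, not drawn from any real collaborative; the point is only the structure: whenever sharing carries any perceived cost, withholding is the dominant individual strategy, even when mutual sharing would leave both parties better off.

```python
# Toy two-player information-sharing game. Payoffs are hypothetical:
# b = benefit of receiving intel, c = perceived cost/risk of sharing it.
# Each entry is (row player's payoff, column player's payoff).
b, c = 3, 1

payoffs = {
    ("share", "share"):       (b - c, b - c),  # both share: each nets b - c
    ("share", "withhold"):    (-c, b),         # row shares, gets nothing back
    ("withhold", "share"):    (b, -c),         # row free-rides on the sharer
    ("withhold", "withhold"): (0, 0),          # status quo: no one shares
}

def best_response(opponent_action):
    """Row player's payoff-maximizing action given the opponent's action."""
    return max(("share", "withhold"),
               key=lambda a: payoffs[(a, opponent_action)][0])

for opp in ("share", "withhold"):
    print(f"If the other party will {opp}, my best response is: {best_response(opp)}")
# Withholding wins by c regardless of the other party's choice, so the
# equilibrium is mutual withholding at (0, 0) even though mutual sharing
# at (b - c, b - c) = (2, 2) would be better for both. Lowering c (e.g.
# via sanitization protocols or liability protections) is what changes
# the equilibrium, not adding more channels.
```

This is the classic prisoner's-dilemma shape behind "the perceived risks of sharing outweigh the benefits": the fix is reducing the sharer's cost, not merely creating another forum.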
Is AI (the adjective) really that different? What makes an "AI security incident" different from a "traditional" cybersecurity incident? This is one of the questions these exercises are well suited to explore. My initial impression is that AI is simply a technique that will be applied to a growing number of fields (e.g., cybersecurity, healthcare, finance), and that those fields' processes and information-sharing structures are robust enough to adapt to any additional constraints (like pace) imposed by AI adoption. We shouldn't treat AI like a noun; it's a technique that's going to show up in an increasing number of places.
Defense in depth is still king, and detection is still a must. Many security dialogues focus on prevention, but practitioners who have worked these problems know that detection is where the significant gains are made. Assume breach; then reduce time to detect, contain, and recover.
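"Reduce time to detect, contain, and recover" is measurable. As a minimal sketch, the common operational metrics MTTD, MTTC, and MTTR can be computed from incident timelines; the incident records below are made up purely for illustration.

```python
from datetime import datetime

# Hypothetical incident timelines (all timestamps invented for illustration):
# when the breach began, was detected, was contained, and was recovered from.
incidents = [
    {"breach": "2024-01-03T02:00", "detect": "2024-01-05T14:00",
     "contain": "2024-01-06T02:00", "recover": "2024-01-08T02:00"},
    {"breach": "2024-02-10T08:00", "detect": "2024-02-10T20:00",
     "contain": "2024-02-11T08:00", "recover": "2024-02-12T08:00"},
]

FMT = "%Y-%m-%dT%H:%M"

def hours_between(start, end):
    """Elapsed hours between two timestamp strings."""
    delta = datetime.strptime(end, FMT) - datetime.strptime(start, FMT)
    return delta.total_seconds() / 3600

def mean_hours(start_key, end_key):
    """Mean elapsed hours across all incidents for one phase."""
    vals = [hours_between(i[start_key], i[end_key]) for i in incidents]
    return sum(vals) / len(vals)

print(f"MTTD: {mean_hours('breach', 'detect'):.1f} h")   # mean time to detect
print(f"MTTC: {mean_hours('detect', 'contain'):.1f} h")  # mean time to contain
print(f"MTTR: {mean_hours('contain', 'recover'):.1f} h") # mean time to recover
```

Tracking these three numbers over time is one concrete way an "assume breach" posture turns into measurable improvement rather than a slogan.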
To be clear, I was (and remain) an early supporter of JCDC.AI. A rising tide lifts all boats, and every participating organization brought a valuable perspective to the exercise. In my own AI security work, I know there are times and scenarios where I might engage with any or all of them, but it's hard to define a single information-sharing protocol that would apply broadcast-style across the whole group. That is the challenge in front of JCDC.AI, and I look forward to helping them move the needle.