Agentic Insecurities with Keren Katz

KIMBERLY NEVALA: Welcome to Pondering AI. I'm your host, Kimberly Nevala.

And despite my somewhat gravelly voice, I am beyond excited to be joined by Keren Katz. Keren is the senior group manager of threat detection, product management, and AI at Tenable and is also a contributor to OWASP, which is the Open Web Application Security Project. Did I get that right, Keren? [CHUCKLES]

KEREN KATZ: Yeah, it's "oh-wasp." That's the nickname.

KIMBERLY NEVALA: OWASP-- even better, easier. Excellent. So today, we're going to be talking about the threat landscape presented by AI agents in their current LLM-infused incarnation and what organizations can do to address it. So welcome to the show, Keren.

KEREN KATZ: Thank you for having me. I'm so excited to be here.

KIMBERLY NEVALA: We are most excited as well. Now, before we jump into all the fun shenanigans around AI agents, I'd love to know what initially sparked your interest in working in what is sometimes the Wild West of cybersecurity, in particular.

KEREN KATZ: It is the Wild West. Actually, I just found myself in this domain because I've been working in the intersection of AI and security for the last 12 years.

I started as a founder myself, ML based startup. And it was the days when it was only beginning. So I had to explain to ventures what it means that the machine learns and what it learns. What does it mean? How can the machine learn? So I had to explain that from the very beginning. And then I went to the special operations and in there, I exhibited really state actors that were in the security business. So I fell in love with security. And after my service, I've been at Insignia, where I've been leading the MXD product that they had.

So I've been seeing a lot of security and AI during these years. And it all came together, and the stars really aligned, I joined Apex. Apex was founded two years ago, and it was founded by, backed by Sequoia and Sam Altman. So it all sounds very promising and the team was amazing.

So I saw the market, I've learned it. And I truly recognized that there is a different gap here compared to other security landscapes that I've witnessed through the years. And what I felt is like, as an AI first technology, let's say, I felt like the urge to come in and bring my past experience into this domain to really help out companies.

KIMBERLY NEVALA: Yeah, you've gotten a fascinating front-row seat to a whole range of threats and misdemeanors and malfeasance that most of us, thankfully, probably never have to know about. And it certainly gives you a very unique perspective, I think, on the landscape as a whole.

In that vein, you've said that AI agents, as we think about them today, redefine risk. Not as breaches but as success in the wrong context. And I thought this was such a beautiful turn of phrase that gets just right to the heart of the issue and the things that organizations need to think about today when dealing with AI agents.

So perhaps you could tell us just a little bit about what you meant by that. And, even better, maybe illustrate that with some examples of what we're actually seeing happen today out in the Wild West, as it were.

KEREN KATZ: It is wild. I think that you said it correctly. And what's really distinguished the AI security landscape compared to the traditional security attack vectors is, in the traditional manner, we had to wait for an attacker to come in and to do something. To have this big, beautiful breach and to have a huge ransomware. And it all is so fascinating. But what we're seeing here in AI security, and specifically in agentic AI, is two different concepts that really make it different.

The first one is the insider threat that's been amplified by the agentic AI. What I mean by this is that when you have access to, let's say, Microsoft Copilot, Google Gemini, applications that are the bits and bytes of nowadays Fortune 500 companies, everything is so accessible.

So we know that RBAC was never successful. And it's very disappointing to say so, but that's the truth, right? Because people will open a new file, people will get go and get this presentation. And they're not aware of RBAC and settings and permissions. They just want to go in and publish it around the entire organization.

Back in the day, if I was an insider, I was just curious and I wanted to find something interesting, I had to really look and dig through my entire, let's say, SharePoint or Google Drive. Now I just need to ask the Copilot, for example, what is our upcoming earnings call? What is our forecasted earnings? I'll get a response.

So I was thinking about it before I was getting to see real users' interactions. And I was like, no, no, no, it's too fast. It wasn't going to happen. But I see it on a daily basis. I actually see Fortune 500 companies and users from these companies that are asking these questions. And they just get a response. So nothing, let's say, malicious happened, no abnormality, just the way it is. But some data was exposed here, right?
And the big problem is that once it's a normal behavior and combining it with the fact that the AI security landscape is not yet monitored the exact way we would want it to be, that combination is very, very risky and powerful. Imagine if earnings call data will be published before the exact time it should be. Imagine what will happen to this company, right? We don't want to even think about it.

So that's the first concept that I think is different. And the other concept I see that is different is the fact that AI is not deterministic. So I was playing around with ChatGPT, normal ChatGPT, no manipulations. I asked him who is the best singer of all time. So the first response was - I think it was, who was it - it was Queen or something like this. And the other response was Britney Spears. So it's not consistent.

I've asked the exact same questions. And the two answers might be valid in this case. But it just means that we cannot know what AI will answer because that's how it works. And I feel like most people are not yet aware of it. Because when we define agentic AI and we automate a specific use case in our company, we cannot know what the result of this automation will be.

So if I'm having an automation to update a financial or legal file, I can't know what will be eventually in that file. And the problem is that since monitoring and detection are still young in this field, we do not know which of the files in our environment are touched by AI. So what can happen and what we see in Fortune 500 companies is that some files or some data in an organization just changed. And we don't know how these odd values came to these files.

And then the worst thing is that if someone's asking about this data - imagine the accountant working on the new report of the company and he's asking about financial data of the company - and the financial data is being updated by the AI to the wrong values. And this accountant gets the information, but it's the wrong information. So imagine that happens. Another very, very common example we see is with sales pipelines. So think about AI that updates sales pipelines and automatically sends emails. But to the wrong lead.

Think about such examples. And that's when everything works as planned because we know we're defining our organizations as automated workflows. We know these insiders have access to the data. It all works by the RBAC, the way they define it. So that's when no attacker is in the picture. And it's already happening.

So when I'm talking to security professionals and then I do this exam or this analysis of their environment, they are shocked. They're like [SQUEAKS] I was thinking there are no attackers in this space. I was thinking I'm safe.

So first of all there are attackers in this space. And secondly, it's not only the attackers that we are afraid of.

KIMBERLY NEVALA: So really quickly, for folks who are not familiar with the term, what is RBAC?
KEREN KATZ: RBAC, it's like rule-based access. So it means, usually, when you define access to a presentation or to a file, you usually define who are the people or folks who are able to get access to it.

So most of the time we would want to think that it's well defined and only the people who can access it are configured to access it. But most of the time, what we see in reality is the unfortunate use case that people do not deal with it. They just create a presentation. They do not define and configure the exact people who can reach into it. Usually, the entire organization can see it. And we don't want the entire organization to see it, most of the time.

KIMBERLY NEVALA: Yeah. Or you're going to over index the other way and lock things down so tightly that nothing works. So is it true, then, that implementation of AI agents or agentic AI, pick your term, might be the fastest way for organizations to find out just how good or bad their current security protocols are or are not?

KEREN KATZ: Yeah, I feel like it just amplifies and surfaces everything up. Because it shortens what we call dwell time. We usually refer to it for attacks, when an attacker is trying to expand his behavior and his ability to catch as much data as possible in the environment.

But dwell time for me in the AI landscape will be the time when a user is curious and wants to get an answer until he gets that answer. So if he had to dig in the old days and to search and to look for specific stuff, now it's so easy.

Another question a user asked is, he asked about a specific person. Let's call him Y. So he asked, “is Y a good manager?” And he got the entire HR report about this person, who was his manager. By the way, he was a very bad manager.

So we don't think about this in our daily lifestyle because CISOs are trained to think about attackers, about ransomware, the big, cool stuff. But in fact, it's everyday security. And I feel like it's not about only security but also about governance. It's also about how we protect ourselves as organizations. So I don't feel like it's only the responsibility of the security people but it's our shared responsibility as both builders and protectors.

KIMBERLY NEVALA: And these factors that you talked about. We do think about, let's say insider threats or users, as a vector for fraud or other forms of malfeasance. But what you're talking about here is users not necessarily operating in a way that is intended to be fraudulent or to be criminal in any sort of way. That if we're not really careful about understanding and managing user expectations and also user intention, that we could get in trouble. That in the pursuit of efficiency, we could actually run into cases where the flip side of that becomes exploitation, whether that was intentional or otherwise. Is that right?

KEREN KATZ: Yeah, it's exactly correct. I feel like, back in the days, a user had to be very malicious and to have a specific intention of like, I want to get access to this type of data to exfiltrate it later. But now a user can just get this data just being playful with AI. And that's what is scary the most, as I see it.
And if we're talking about legitimate users, then think about the nonlegitimate user, the one that really wants to get some information, that's one that really wants to take that information and then do the ransomware. So I call that the "2025 ransomware," because, back in the days, you had to really exploit these servers and do a privilege escalation and get into compromised users to get into the really core data. Now it's so easy. I will just ask the Copilot. I wouldn't need the most privileged user in the network. I could just gain access to the lower user in the network and get some precious data.

KIMBERLY NEVALA: And this actually probably then-- I should ask this as a question. Does this also apply to viewing customers as a potential threat vector as well?

Again, not because they're, we'll put together the malicious hackers aside, who probably see this as pure gold, the opportunity here. I know you, just in the interest of the work that you do, do some hacking, and have said it is horrifyingly simple at times to get into places that you really, really shouldn't be able to.

But we have these simple examples of customers asking questions really innocuously. Do I get a refund for something? For instance, a chatbot that's not well guarded and well bounded, saying of course you can. Or being able to manipulate pricing or other deals. Again, not necessarily even with intent, just trying to engage with the system in a way to make a deal or to figure out what the best path is.

So is that also something that organizations need to be looking at from the outside in? Which, again, is that non-intentional yet very directed type of action that can lead to erroneous outcomes?

KEREN KATZ: Well, I think, in most of the cases, you need to be intentional to get something that the AI is guarded not to do. Because most of the cases, when you do have a client-facing AI, you will usually protect it with a system prompt and with specific instructions that the AI follows. So most of the time, those are good for the non-intentional users.

But there are plenty of, I wouldn't say malicious, but of users who want to take advantage of the fact that they're talking to AI. We all want to get things free. We all want to leverage ourselves. So it's already happening. I think you surely heard about the user who bought a car, a luxury car, for $1. So it's already happening. And you shouldn't be that sophisticated to bypass the guardrails. From a few questions, a smart person would understand how to do that.

So sure, it's an entirely new vector of how you manipulate the guardrails of the AI. And we guard it from injection and jailbreak. These are the most common attacks that people know and are aware of. But it isn’t that dramatic because, most of the cases, it's not that complicated to bypass the guardrails. That's what I'm trying to say here.

KIMBERLY NEVALA: Yeah. So we are going to talk about what organizations can be doing to expand how they govern and their oversight of these systems. Before we go there, there I know that you recently, or in conjunction with OWASP-- is that correct?-

KEREN KATZ: Yeah.
KIMBERLY NEVALA: -- recently put out a GenAI security report on the state, specifically, of agentic AI security and governance. Can you talk about that a little bit? Were there any findings in that report you think folks would find surprising or that you, as a security professional, found surprising? And then we'll talk about some of the recommendations that are included there, as well.

KEREN KATZ: So I think the main thing we try to do is to really separate between different use cases of agentic AI. So we talk about the enterprise AI, which has a lot to do with the insider threat we just talked about. Of course, it has a lot to do also with adversarial, such as context injection.

Context injection is a huge attack vector that's just been addressed by Google in June. Who actually told that they see this threat as applicable to Google Gemini. And of course, it's relevant to any RAG-based system.

So it means that if you send an email to a user that is using Google Gemini, this email can be stored as a context in the RAG system of it. And then, when the user asks a specific question that's related to this information, Gemini can collect this email or invite or file as a context to your response.

So if I put prompt injection in this email and I'm external to your organization, I can send it to any organization now. You will not know that. And once it gets into the AI engine, when it's being collected for context, at the same time that you're asking the questions - it could be months later - the AI will do what I want it to do. So that's another huge attack vector of enterprise AI.

We talked about client-facing interfaces. So I think we covered that, as well. But AI threats are a big thing in there. So either to get sensitive data - imagine almost any company has this client-serving AI. And that usually comes with databases that store data about clients because it serves clients. So imagine if I'm successful at leaking old information and extracting all the sensitive data from it. Ransomware of 2025, right? I don't need to be that smart. Sorry for saying the truth. I don't need to be that smart now. So that's about that.

And we talked about multi-agent systems. So we got MAS, which is some sort of agent threat that are very coupled. They're highly coupled. And then we’ve got other agentic ecosystems that are loosely coupled. So these are two different examples.

So we see lots of MCP connections. I guess you're hearing the word MCP almost every day now. But I feel like MCP is just one type of agentic protocol that we can execute tools and call to another AI or to other services from our main agentic applications. So we talk a lot about it and what we want to say about that is that it's just another input and output stream. So any AI threat is applicable to it, but there are more attacks that are applied to it.

So we all know the HTTP and all the types of protocols, attacks. They can now come in a new form because imagine there's the simple attack of HTTP. So imagine if someone is saying it is the tool of, let's say, Salesforce. But they're not. And then I'm transferring and sharing data with Salesforce tool. What if it's not Salesforce? What if I'm getting back data that is corrupted, it's manipulated, or containing just jailbreak? So all these questions are being answered. But it's only the tip of the iceberg. It's only the beginning.

KIMBERLY NEVALA: [CHUCKLES] I'm finding myself increasingly terrified here, Keren, by what's really happening. So can folks go to that report also and see that taxonomy of agents? And will that help organizations think through the different scenarios and perhaps different, I don't know if it's really different frameworks, but different elements they need to put in play to protect these systems?

KEREN KATZ: Sure. The purpose of it was because we are also overloaded with information and LinkedIn posts. Everybody are talking about MCP. Who knows what MCP is? We wanted to put it all in one place and talk about the use cases, the risks that are applied to each of the use cases, and what should you do?

KIMBERLY NEVALA: So let's actually talk about this because I originally found your work through an article you had written about expanding governance. What can we do about it? And what should organizations be thinking about? Are there a few-- I don't know if there are principles or tactics or approaches that we need to lean into? Because one of the things that I also find from your work is that you're very clear that these are not superhuman or super-powered systems. Not only can we, but we need to, have control over them. Is that correct?

KEREN KATZ: It is correct. So I feel like the first foundation will be visibility. I'm talking to, I'd say, tens and hundreds of CISOs. And what I hear from CISOs and executives is, I don't know what AI is running in my organization. I don't know.

So that will be the first layer. Before we know which are the use cases, we need to know what AI is touching my data? Because think about application security. Back in the day, we wouldn't let any service run in our environment without us knowing. Or, as an executive, we wouldn't have a big technology running in our environment without us knowing. So that will be the first place to start.

And then I think, after knowing it, we need to ask ourselves, what are the use cases? So what is the AI being used for? What are the intentions of the users using it? What are the risky users? Because in every organization, there will be the users that, from what I see, there will be the users who are just working with the AI, uploading files. Of course, we need to see how we control our sensitive data from getting out of the organization. But they're doing what they should be doing. But there is a different group that will be trying to get sensitive data, trying to get access to what they shouldn't have access to.

And also, we see a trend of people that are over-relying on AI. So we see people that are trying to make hiring decisions with AI. Should I hire this candidate? And we see people trying to do investment banking using the AI. So think about investments. Firms are investing in specific-I’d say specific stock, specific companies, because AI says so.
AI can make mistakes, right? And it seems to be so authoritative and so respectful that we all trust it. But we should question it. So we should at least notice users and educate them that they should think twice before just going for the AI recommendations.

So we need to understand, what are the use cases? We need to understand which data the AI touches because the data is at risk. And that's OK because AI is everywhere. If you wouldn't involve it and if you wouldn't incorporate it in our core operations, we will be behind there. We will not proceed. We will not be the leaders.

So we should involve it. And we should have AI in our main processes. But we should have specific guidelines about it. So for example, we shouldn't let the AI just update, in my opinion, our most critical database. Or we shouldn't let it automate our most critical workflow. We can, however, have use cases in which the AI recommends it, and then we have humans in the loop. But that's my perspective.

Some people will say, OK, I don't care. I'm OK with paying the price of wrong operations, of shutdown of my operations, of the public seeing me in a specific light. But I feel like the approach of having the AI in your most critical operations but monitoring it and defining which tools and which actions are allowed, and which need and require a human supervisor, I feel like that's the right balance of it. Where you earn a lot of the value it can give you but you're also in control of it.

KIMBERLY NEVALA: And so when you talk about intention detection or understanding intention and understanding the use cases, is this primarily then a process or an auditing mechanism that happens? That strikes me as not something that's necessarily systematic. Is that correct? And does this then say that our security professionals, your CISO teams, need to be much more actively involved and engaged earlier in the development of these types of applications and these types of software than they might otherwise be in a traditional development life cycle?

KEREN KATZ: I wish we could do that. I wish because I feel like maybe if we were talking six months earlier, a year earlier, I'd say, yeah, sure, they should be in there.

But now the pace is so fast. Any developer, any user now can just connect to an MCP server, work and create an agentic workflow, and connect it to any data of the company. And don't think about developers. Any people from marketing - it shouldn't be a developer - can use the no-code platforms and just connect the organization's more sensitive data and operations to AI now. So I feel like you cannot keep up by knowing from the users.

But you should use external tools as you use for any other security domain to really analyze the data. And personally, I've been involved in intention detection from very long ago. I feel like it's been a year and a half since we started to talk about it because that's what we've been seeing in clients' environment. And there wasn't any solution to it back in the day.

So it took us a while to understand how do you even define or how do you even make families of what user intentions are? So we really worked with lots of firms to understand their day-to-day behavior, to learn about how people are using AI, to create these intention detections. Also, what we understood is that it's kind of custom to each company. Because in a finance firm, the people wouldn't be acting the same way they would be acting in, let's say, a health care company. And, of course, not the same they will do at a restaurant or a dining company.

So there are specific intentions and use cases that are unique to each company. And my main recommendation will be to think of what are the use cases that are unique to your company? What are the most sensitive operations for your company that you want or don't want people to do?

And what we build specifically in my company - and it's not like an advertisement; it's not like an ad - it's just how we see it. Is that it’s sometimes hard for security people or an executive to understand and map all these sensitive operations in your company. Because think about the Fortune 500 company. That’s usually big companies. And you don't know about any new, let's say classified, project that is going on.
But what if people are asking about it and getting responses? So how can you know about that? So we really work hard to build models that will surface such use cases that people are talking about. But it's really hard to keep track unless you get such visibility.

KIMBERLY NEVALA: You also said, though, something interesting, somewhat in passing, which is we really don't know, in a lot of cases, where AI is being used. Everyone is a developer today. So maybe you have Copilot, or you have some other systems that are available, and people have hands on. What is it-- and you're right, typically our production environments are very locked down in organizations.

What is it about, it's a little bit of a tangent, about AI do you think and about this moment that has allowed that to proliferate? I mean, we've certainly talked about shadow IT in the past in different ways. But perhaps not with applications or technologies such as this that really could have such a profound impact at the core of the business. How did we get here? What is it that allowed that to happen?

KEREN KATZ: I think AI became the cool kid, you know?

KIMBERLY NEVALA: Yeah.

KEREN KATZ: In the old days, if you wanted to adopt new technology, you had to be an early adopter.

But first of all, Gen Z, our gen, became way more early adopters in their nature. And I feel like it's not only Gen Z. AI is here and if you won’t be adopting it, you will stay behind, and people understand that.
And people see how this is being an ease on their daily operations, how it's really fast and has improved their daily lifestyle. So I feel like everybody wants to adopt it.

Comparing to other technologies, I feel like since the no-code platforms became a thing, it is so super easy. And you know, I was fascinated the first time I was looking at Microsoft Copilot. And the team has done such a great job because you can connect it to anything. There are hundreds of tools you can connect Microsoft Copilot to.

But it is a security mess because there is no chance any security professional knows to this day which tools are available in there. I had to look at it for hours a day. And we’ve got a team of analysts working and looking at it to understand and try to estimate the risk of each of them. So it’s just really a huge opportunity, but a huge, huge, huge risk in here, too.

KIMBERLY NEVALA: And no doubt someone will suggest or come up with an AI tool to try to monitor and catalog your AI for you. Which seems to speak back to what you were talking about, the multi-agent compounding amplified risks.

So if we think about this in terms of the more systematic actions that people can take and maybe some of these are actions that we use in other areas but haven't applied to these types of AI applications or agentic workflows.

One of the things you talk about is real-time monitoring, and not just inputs and outputs. Can you tell us a little bit about why people might default to thinking I'll just look at what's being put in and spit out of the system and out of the workflow? And why that's not sufficient and what we should be doing instead.

KEREN KATZ: So it's super interesting because this domain evolves so fast that my mindset shifts, as well. Because I was really into in-time monitoring. And I'm still thinking it's a very, very important block for recognizing and detecting real-time attacks.

But I feel like since I wrote that Forbes article the industry shifted and everybody is now creating agents. And in each organization that I see, the vast (number) and volume of data around agentic AI, and not only LLMs, it's so wild that I feel like, in real time, it will be very, very hard now to manage it. Ideally, I would want it to be monitoring in real time. But I feel like you need to have a more preemptive approach because in real time it will be super hard to manage it all.

So what I'm thinking is more practical now is to look at your users. What are the risky users? What are the risky applications that are not monitored yet and you need to go and monitor? I'm talking about Chat or AI. So these are the applications that you usually wouldn't know exist and that are going live in your environment. So monitoring those.

And maybe, in some cases, ask to work in specific applications that are more safe for you, that are monitored, and you can have visibility into. Most of the time, it won’t work because people want to work on the tool that they love and they want to work on. But try to monitor those tools, as well. So that's what I say.

I would say to look at risky conversations because usually an attacker does not use just one prompt. But it will be a more systematic thing. You will do reconnaissance. Reconnaissance, it's like a phase of attack that you usually try to get resources and to understand how the system you're attacking works. So you usually see users asking about, hm, which model do you use for AI? How do you work? How do you call for tools? They will be asking the AI such questions.

So I will be putting lots of attention into mapping risky interactions, mapping risky users, and mapping risky applications or critical workflows that are using agentic AI, and then addressing them upfront. I'd say be proactive instead of just being reactive to it.

KIMBERLY NEVALA: So even with the best proactive, I guess, surveillance and interaction, some of those things may slip through. Or there may be some intentions that in fact turn out to be risky even if that wasn't the user's intent initially. Do you still think about and work with companies about how to track the overall activity or behavior of agents like we would other types of system activity?

KEREN KATZ: Indeed because I come from the XDR domain, where you track users and users' behavior. And it came to my mind a few months ago that the new user behavior analysis will be now the agent behavior analysis. Because we were talking about the vast volume of activities that's happening.

And it's not necessarily that an agent is like a malicious agent or a compromised agent. But sometimes it wouldn't have the right context. It will have too excessive agency. It will have too much data, too sensitive data that it can approach, that it can touch. So it depends on this type of combination.

And sometimes the proactive thing will be, I limit this agent's tools and actions to be only read-only. I don't want this agent to update anything or to do any writing. So that would be the proactive one. But also, the reactive one will be OK, so I'm tracking this agent. I see that, if I already limit its permissions, I see the way that users interact with it. I see it lacks context. I see maybe it is based on a weak engine, like Deep Seak engine. I don't want my agent, personally, to be based on DeepSeek.

So it's a combination of lots of events and lots of properties that we want to correlate into which agents are the risky and which are not. Because we have so many data out there. And I feel like what we can do as vendors is to bring out only what is actionable and the bottom line of it.

KIMBERLY NEVALA: Are there other guardrails that you recommend for clients or that organizations should be considering that we haven't touched on? That, I mean, maybe these are things that they apply in other areas or that are just net new processes, practices, systematic interventions that are top of mind for you at this moment in time?

KEREN KATZ: I think before even looking and having the visibility, really thinking from the enterprise perspective. What is important to my company? OK, so where do I want to involve AI? And I understand there will be risks that apply to it. But it's worth it and I want it to be in there. And then, how do I want to do it? And what are the guard rails that I want to put? And then where do I want to not involve AI?
And most of the companies, like 90%, maybe 100% of their activity will be I want to have AI all over the place. And that's OK. But then, how do I want to have it? Which users can use it? How do I see which data is connected to it?

So asking the questions before understanding and seeing everything, that's also a good step. Because you want to understand and you want to figure out your organization policy from thinking about the business perspective: what is critical for the business and what are the impacts.

Because there's another interesting aspect that I feel like companies haven't figured out yet. What they do in case of AI breach, because it's very interesting, because there's a use case. It's so funny to me [CHUCKLES] but one of my clients, there had been a jailbreak attempt in their environment, Like a real jailbreak. And I helped them look at it. And I had them sorting into it.

And they asked me, OK, that's bad. What do I do now? So they haven't sorted the plan yet. And it's super interesting because it's a big firm, an amazing firm, super professional. But it feels like people are so lost in understanding what they should be doing. And it's super odd because in other security domains, there is an incident response plan before there's anything else on the table.

So I feel like that's super interesting. And I feel like you need to understand how the AI interactions and activities, I'd say, how they come in place into your entire organization, workflows into your network's activities, into the user identity activities, and how it all comes together and what's the effect of it.

So understand the risks first. And that's why I recommend building your own policy. Mapping what are the risks and talking very, very specifically. So we know the data exfiltration, it's scary. We know ransomware is scary. But let's talk about specific agents we know.

And of course, you do that when you build the policy. But then you do that again once you understand how AI is being used in your organization. In this specific use case what could happen? So if that agent got jailbreak from the user, which data are you connected to? I want to see if this data was exfiltrated. I want to look also in the network to see if this data got out of my organization. I want to know if the ransomware is behind my door.

And then think about which actions that this agent can do. So maybe it's actually triggered malicious or risky actions. I need to look at the processes related to it to understand if I need to stop specific operations. If I need to make sure nothing happened. So understanding specifically the implications of each agentic use case is the key in here.

KIMBERLY NEVALA: And it seems to me that organizations don't need to get caught up here in that traditional chicken-and-egg issue. Which is, I don't know what my agents are yet, so I can't have an incident response plan. Because there are, what's the word I'm looking for, not profiles, but there are well-known attack vectors, right? So we know that there's something like prompt injection or data exfiltration. These risks or things that could happen are fairly well known.
Now, certainly what the particular implications are for a specific agent and a specific context might not be known. But it seems like thinking through how we would respond if this type of thing happened even outside of knowing explicitly this is agent 1, 2, and 3.

Then there's some combination of having that general plan and understanding of what that landscape is and what the different types of incidents that could happen are. Then mapping agents to which incidents might occur. And is that true? Because I foresee organizations potentially thinking, yeah, but I don't know what the agents are, so I don’t know what incidents to plan for--

KEREN KATZ: Right, and it also changes so quickly. Because you can have that plan and then, a day after, 10 new agents are being created. So I totally agree with what you're saying.

You can create and you should create a template of OK, so I know the data exfiltration can happen. I know that output can be manipulated. What should I be doing in each of these cases? And then, of course, there needs to be adjustments. It depends on the use case because it's an entirely different use case if the user successfully exfiltrated the data through the network after they got it from the AI. Or if the data that they got is super, super precious or if they actually triggered a specific, most risky action ever in your organization. Or they just sent an internal email to, I don't know, something very, very, not risky.

So of course it is fair. But I feel like usual security and traditional security differs as well. Because you have this plan for what happens if someone does this and that? But then, in this specific use case, you implement it, and it depends on the way that the specific use case happened. So it is exactly like in old times in this case. But I feel like it is a key to understand how the AI incidents really interact or impact the entire security's overall posture. And then, what should I be doing once it happens?

KIMBERLY NEVALA: And I think that is somewhat reassuring. Or I would hope that's reassuring for folks, as well. Which is although the types of threats and the vectors themselves might be changing, we actually do have proven practices for how to respond. And processes and teams and roles who understand security, who understand threat response, who understand remediation. And those can be applied.

So as much as we talk about this being a whole new world and the new - the expanding landscape, I should say, because not all of these things are new - and some of these new vectors, it doesn't mean we're starting from scratch. And I think that is where a lot of folks today tend to get hung up and it can get you into a bit of that analysis paralysis.

KEREN KATZ: Yeah, and I feel like another key practice that I will recommend to any organization is to get educated on AI because I see a huge difference of, let's look at cloud security. You shouldn't be a cloud, let's say, expert to understand mal configurations.

So I'm not coming to really make this look like a small trend. It is risky and it’s respecting its place. But what I'm saying here is it's super hard to understand how AI threats are working, what is prompt injection and jailbreak, without understanding how AI and generative AI in particular works. So I feel like getting the right people to educate your teams before anything happens is the key. It will give them the, I'd say, thinking so that they are able to deal with it once any threat will come.

KIMBERLY NEVALA: And that education also then helps with that early detection and thinking about user intention--

KEREN KATZ: Yes, all coming together.

KIMBERLY NEVALA: --and making sure that we're not inadvertently making that slide from, we were trying to pursue maximum efficiency and have slid down that slippery slope into exploitation - either willingly or unwillingly as the case may be.

So all of that being said, there is a lot changing and a lot going on. What would you say to the audience or even to security experts about how to keep up? And what should we be keeping an eye on as we move forward? Last words? [CHUCKLES]

KEREN KATZ: First of all, it's super hard. You really need to track it. You really need to work at it. So maybe one thing we haven't mentioned is to have people assigned to it. Because you need to be focusing on it. And I feel like a huge thing to look at is the technology. So it's not only to track the security and that a new adversary has been published. But also to follow the technology and then to try to analyze and consult the right people about how this technology shift, or how this technology advancement, actually affects the security posture.

KIMBERLY NEVALA: Well, thank you so much. I know that there is so much to be said on all of these points that we have just touched on, dipped our toe in today. But I really appreciate you coming on and sharing those insights. We will direct folks to follow your work, and including that OWASP report, which I think does lay out very nicely a lot of these terms and concepts.

And so while I can't say that these conversations, people necessarily find them comforting, I do think that there is a lot of value and security in knowledge. That is what you're helping us all get and I really appreciate that. And thank you for your time.

KEREN KATZ: Sure. Thank you for having me.

KIMBERLY NEVALA: Yes, absolutely. We'll have you back in the future and we'll see where we are and keep our finger on the pulse of this.

KEREN KATZ: We will keep updating, yeah. [BOTH CHUCKLE]
KIMBERLY NEVALA: Absolutely. All right, and to continue learning from thinkers, doers, and advocates such as Keren, please subscribe to Pondering AI now. You'll find us wherever you listen to your podcasts and also on YouTube.

Creators and Guests

Kimberly Nevala
Host
Kimberly Nevala
Strategic advisor at SAS
Keren Katz
Guest
Keren Katz
Senior Group Manager of Product, Threat Research & Al, Tenable
Agentic Insecurities with Keren Katz
Broadcast by