Nadav Cornberg Podcast Transcript
Nadav Cornberg joins host Brian Thomas on The Digital Executive Podcast.
Brian Thomas: Welcome to Coruzant Technologies, Home of The Digital Executive Podcast.
Do you work in emerging tech? Are you working on something innovative? Maybe you're an entrepreneur? Apply to be a guest at www.coruzant.com/brand.
Welcome to The Digital Executive. Today’s guest is Nadav Cornberg. Nadav Cornberg is the CEO and co-founder of Eve Security, a platform that governs and monitors autonomous AI agents in real time, ensuring they operate securely, transparently, and within enterprise defined guardrails.
He is an expert in the emerging field of executive-level AI agent governance, specializing in how organizations can safely deploy, monitor, and control autonomous systems entrusted with high-stakes decision making. As enterprises rapidly adopt agentic AI to operate across critical business functions, Nadav brings a pragmatic and technical perspective on ensuring these systems remain aligned, observable, and secure, particularly across what he calls the crown jewels: a company's most critical business systems.
Well, good afternoon, Nadav. Welcome to the show.
Nadav Cornberg: Good afternoon. Good to see you.
Brian Thomas: Absolutely, my friend. Yeah, I appreciate that, I really do. I know you're hailing out of Austin, Texas, so we're in the same time zone today, and I appreciate that. And Nadav, if you don't mind, I'm gonna jump right into your first question.
You've become a leading voice in AI agent governance and now lead Eve Security. What experiences shaped your journey to focusing on securing and governing autonomous AI systems?
Nadav Cornberg: So in my past, when I was at Check Point, it was just when mobile devices were being introduced into organizations, just when the iPhones came out, and at the time I was leading the effort of: how do we now securely introduce this new technology that is owned by individuals?
And that was a new paradigm, a new concept of how organizations are gonna deal with all the capabilities that these devices have, and the needs that people have to now improve their day-to-day workload. You could do so many things from your phone that you just couldn't from your laptop. And when you combine that with even my previous startup, where we were dealing with hospitality and different people coming in, it was about how do you trust the untrusted.
If I really think about the two problems we were solving, both of them come down to an untrusted entity that has a lot of privileges in an organization. When you look at AI today, when I was thinking about this problem, it was very much, wow, we're now introducing these strangers, these digital strangers, into an organization, and we need to connect them up to these critical systems.
This is gonna be a challenge for CISOs to manage this definitely from a perspective of how things can go wrong with those critical systems.
Brian Thomas: Thank you. Really appreciate that. And I like the backstory here, your journey when you started out. I know smartphones were just being introduced back in the day when you worked at Check Point, and I agree, technology makes life easier for us, for the most part, except for people like you and me, Nadav, where security's a big concern. And I appreciate the fact that you're trying to secure the world and make it a better place so we can continue to use these amazing devices. But at the same time, we gotta be on the lookout for the bad actors out there, so I appreciate that. And Nadav, Eve Security is focused on governing and monitoring autonomous AI agents in real time. What problem did you see in enterprise AI adoption that made this platform necessary?
Nadav Cornberg: At the time when we were thinking about the concepts of what we wanted to work on, it was just when MCP was introduced by Anthropic, and when we were talking with CISOs, it became very clear that the ability to just hook up an AI agent to a new system is gonna be extremely easy. We know that AI agents will perform unintended behaviors. Combine that with critical systems like Salesforce, GitHub, what have you, and it was just very clear that if things go wrong, it could go really, really bad.
When we were looking at the options that were out there, it's not enough to have the classic, well, let me just notify you if something goes wrong. 'Cause if you send a message saying, well, guess what, production just got deleted, well, that doesn't really help the company. So we really understood that for us to provide a solution that solves the CISO's needs, it really needs to be something that governs it in real time.
So that things that could go extremely wrong from an operational perspective will be blocked. And that's where we see a lot of the differentiation. And when we have the conversations, that is where CISOs are driving the conversation to: I don't wanna just know about it, I want to prevent it. I want to control it.
That’s where we focus our solution.
Brian Thomas: That's amazing. I just went to a conference recently and talked to CISOs about this exact same thing, and you're absolutely right. As you said, these AI agents certainly are introducing new problems for the security landscape, but what you talked about here, having that governance and real-time prevention, is absolutely key.
You know, after the fact, while you may be able to detect something and minimize the damage that was done, it's always best to prevent in advance and be more proactive, so I appreciate that, Nadav. The concept of agent-in-the-loop oversight is central to your work. How does this approach differ from traditional human-in-the-loop models, and why is it better suited for autonomous systems?
Nadav Cornberg: I'll say how it's similar and how it differs. The similarity is that an agent in the loop really needs to do the same tasks, in some way, that a human in the loop does. The difference is scalability. An organization of 50 or a hundred people could have a thousand agents. A human in the loop will not scale to that, and that's where the need for an agent in the loop comes from.
And I'll explain what an agent in the loop does, just like a human in the loop does. It needs to be able to assess: is this normal behavior, or is this some level of anomaly? Why is this employee operating this way? What's the level of risk for the organization? Do I need to have a conversation with this employee about what they're about to do?
And then do I need to maybe alter their way of thinking, or the way they want to operate their task? When we're dealing with AI agents, this agent in the loop is very much like a manager or security manager in an organization. We see people performing their day-to-day tasks with, I think, a pure intent of doing well for the business, but sometimes it's risky from a security perspective, and that's why you have people monitoring security from a physical and employee perspective.
But when you look at AI agents, they're very naive in the ways they try to complete tasks, and you need to be able to have something that's governing them and saying, whoa, whoa, whoa, why are you now trying to take on this task? You've never done this before. Really alter them or guide them in the right direction.
And for that, you need to have a digital component that can handle the sheer amount of tasks being done and the scale that's gonna grow. So human in the loop really needs to be saved for those critical moments where a task is so high risk and it's trying to be executed.
And I, myself, as an agent in the loop, want to escalate to what we call my manager, which is a human in the loop, to get them involved in this task. You really want to save those moments for a human to deal with, and not the thousands, if not millions, of requests that are coming in from agents at a given moment.
Brian Thomas: Thank you. Really appreciate that. And we talk about that all the time: human in the loop. But in your case, these agents need to do the same tasks as humans, obviously at scale, and need to process and make these decisions like humans. But for those bigger decisions or those critical tasks, we still need to keep the humans in the loop.
Obviously, that's really important. And I really like how AI has improved the way it works at scale with its accuracy. Now, I know a couple years ago it wasn't quite there, but now it is really making a big difference, and it is certainly a game changer. So thank you again, Nadav. The last question of the day: as we look ahead to the future, how do you see AI agent governance evolving over the next decade, and what frameworks or standards will be essential for building trust in autonomous systems?
Nadav Cornberg: I see a lot of parallels between what's happening today in the physical world and the digital world. There are gonna be more concepts of, when agents want to operate, how are they gonna request additional permissions or escalate their permissions. There's gonna be an aspect of separating who is actually doing the work from who's approving the work.
As I mentioned, when we're looking at the physical workforce, there are a lot of paradigms we're using there that will have to be applied to agents. The way things operate today is very open: the way communication can happen between agents, the way they can take actions. There will have to be frameworks in place that allow requests for critical tasks to be escalated, monitored, and governed in a way that allows business continuity but checks all the boxes. And I think today we are definitely driving business. We're taking what people call calculated risks from a security perspective, until that cataclysmic event happens where there's a big breach.
I think that's gonna start bringing more standards and frameworks into place to prevent that. But the agent workforce is not going away, and we will need to continue to improve it to make sure that, at scale, the business is not exposed to any risk that would keep it from being able to operate and serve its customers, without having to think about how an agent is gonna go rogue tomorrow. So, to that point, as I mentioned, the frameworks and the standards are gonna evolve, definitely when it comes to the ability to approve critical tasks where the impact of executing them could be extremely harmful to the organization. There will need to be a framework for that.
Brian Thomas: Thank you. You talked about the parallels between the physical and the digital world. Obviously there are many paradigms in the physical world that will need to be applied in the digital world. You talked about that a little bit there, but the standards and frameworks must be foundational.
Of course, we'll need to evolve and improve those over time as we introduce more and more of these AI agents into play, and I think that's really important. And again, you highlighted some of the things that AI agents can do today, but for certain critical decisions, certain standards and frameworks must be in place in order to have this work as best as we can in this ever-changing cyber world we live in.
So I really appreciate that, and Nadav, it was such a pleasure having you on today. I look forward to speaking with you real soon.
Nadav Cornberg: Thank you so much for having me.
Brian Thomas: Bye for now.