Yakir Golan Podcast Transcript
Yakir Golan joins host Brian Thomas on The Digital Executive Podcast.
Brian Thomas: Welcome to Coruzant Technologies, home of The Digital Executive Podcast.
Do you work in emerging tech? Are you working on something innovative? Maybe you're an entrepreneur? Apply to be a guest at www.coruzant.com/brand.
Welcome to The Digital Executive. Today's guest is Yakir Golan. Yakir Golan is the CEO and Co-founder of Kovrr, a global leader in cyber and AI risk quantification.
He began his career in the Israeli intelligence forces and later gained multidisciplinary experience in software and hardware design, development, and product management. Drawing on that background, he now works closely with CISOs, Chief Data Officers, and other business leaders to strengthen how organizations understand and manage both cyber and AI risk at the enterprise level.
Yakir holds a bachelor’s in electrical engineering from the Technion Israel Institute of Technology and an MBA from IE Business School in Madrid. Well, good afternoon, Yakir. Welcome to the show.
Yakir Golan: Thank you, Brian. Pleasure to be here.
Brian Thomas: Awesome. I appreciate it, my friend. And I know you're hailing out of Israel right now.
I'm in Kansas City, so we've got about eight hours between us. But I appreciate you making the time to do this. Yakir, let's jump into your first question. You began your career in the Israeli intelligence forces and then moved into software, hardware, and product roles before founding Kovrr. How did that background shape your thinking about risk systems and what enterprise organizations truly need when it comes to cyber and AI exposure?
Yakir Golan: Yeah, that experience in the intelligence forces really shaped how I think about risk and systems. You're trying to look at how small signals connect to larger patterns, and how one detail on its own might not mean much, but together they can tell a story. That perspective has stayed with me and guided me.
It's the same way I see cyber and AI risk today: dynamic, interconnected, and constantly evolving. Of course, you can't look at any single event in isolation. You have to understand how it fits into the broader ecosystem. During that time, I also saw how much valuable data exists in the hidden layers of the internet, particularly on the dark web, and how little of it was actually reaching the organizations being targeted.
That gap between available intelligence and accessible insight really stuck with me, and when I moved into software, hardware, and later product roles, I became focused on how to bridge that divide: how to take raw, fast-changing data and turn it into structured models that could simulate real-world behavior and quantify exposure.
Whether that means modeling a cyber incident or an AI system failure, the goal was to make something abstract measurable. Over time, I noticed another gap emerging in enterprises, where security teams had a lot of technical data but leadership was struggling. They didn't have a clear way to translate it into a business context, to understand what an exploited vulnerability or a flawed AI model might actually mean financially, operationally, business-wise.
And most risk management approaches were still very much static, built on assumptions rather than real intelligence data. I wanted to bring that discipline of continuous data collection, modeling, and validation that I learned early on into enterprise risk management. And that idea actually became the foundation of Kovrr when we started.
From the start, our goal has been to democratize access to real-time risk intelligence, applying it to cyber first and now to AI exposure. We wanted to give organizations the same level of situational awareness and quantifiable insights that intelligence agencies rely on, but in a way that's practical for business leaders.
And that's the driving force of Kovrr: helping enterprises manage cyber and AI risk proactively, with evidence and clarity, instead of reacting to the aftereffects.
Brian Thomas: Thank you. Really appreciate that. And I think it's important, you know, both of us served in the military, and I think it served us well.
We learned a lot, and I took away that the perspective you gained from your military experience really made a big impact, and you took that with you when you went into the civilian world. Indeed, I like the goal here: you were trying to make everything measurable in all the work you were doing with that continuous data collection, while always looking at ways to lower that risk.
The cyber risk, the AI risk, and I think that's important. So thank you for sharing. And Yakir, Kovrr recently launched its AI Risk Assessment and AI Risk Quantification modules, which give organizations a way to model potential loss and build visibility around AI risk. Why do you believe moving from qualitative risk claims, you know, "we're concerned about AI risk," to quantified metrics is a game changer?
And what are the biggest barriers organizations face in making that transition?
Yakir Golan: So when you quantify risk, it stops feeling abstract. Suddenly it turns into numbers people can talk about. Engineers can talk about it, GRC people can talk about it, leaders can talk about it, the board can discuss it. We all see the same picture, and we can make decisions without guesswork or subjectivity.
Numbers give everyone a shared language. Then AI risk moves from theory to something you can measure, compare, and act upon. You can rank exposure, see which controls are actually moving the needle, and direct budget to the places with the highest return. That shared language addresses the real problem, because for years, risk conversations lived in subjective scoring: red, yellow, green; high, medium, low.
Each team had a different definition, which made alignment tough and basically slowed decisions, which is the toughest outcome. Quantification flips that: it connects cyber and AI exposure to financial and operational outcomes, the same way you evaluate any other business risk. It creates one language across security, GRC, finance, and the leadership of the organization.
Why isn't everyone doing it today? Well, first, it feels new. AI risk today reminds me of cyber a couple of decades ago. People sense it matters, but it still sounds niche or academic. Meanwhile, the reality is that AI has already spread through day-to-day operations, which means the stakes are real.
And second, adoption is outpacing governance. Teams are rolling out gen AI features and pilots quickly, while policies, controls, and oversight are still catching up, and in that environment, the starting point can feel fuzzy, even overwhelming. The good news, I have to say, is that you do not need to start from zero.
You can use the same governance playbook that works in cyber: begin with a structured control assessment to establish visibility. With AI, that typically means using the NIST AI RMF or ISO 42001 frameworks, and from there you build progressively: track maturity over time, identify gaps and owners, add quantification as your data and intelligence advance, and treat it as an iterative practice rather than a one-time project.
When organizations take that path, two things happen fast. One, alignment improves because everyone is looking at the same measures. Two, investment decisions get better because you can show which actions reduce expected loss. And this is how AI risk moves from a theoretical worry to a managed business issue.
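To make that playbook concrete, here is a minimal sketch of what a structured control assessment with maturity tracking might look like. The control names, framework references, owners, and the 0-4 maturity scale are hypothetical placeholders for illustration; this is not Kovrr's product, nor a complete mapping of the NIST AI RMF or ISO 42001.

```python
from dataclasses import dataclass

TARGET_MATURITY = 3  # illustrative target on a hypothetical 0-4 scale


@dataclass
class Control:
    name: str           # hypothetical control name, for illustration only
    framework_ref: str  # e.g., a NIST AI RMF function or an ISO 42001 clause
    owner: str | None   # accountable team, or None if unassigned
    maturity: int       # current assessed maturity, 0 (absent) to 4 (optimized)


controls = [
    Control("AI system inventory", "NIST AI RMF: MAP", "GRC team", 2),
    Control("Pre-deployment model risk review", "NIST AI RMF: MEASURE", None, 1),
    Control("Third-party AI vendor assessment", "ISO 42001 Annex A", "Procurement", 3),
]

# Visibility first: flag controls below target maturity or without an owner.
for c in controls:
    gaps = []
    if c.maturity < TARGET_MATURITY:
        gaps.append(f"maturity {c.maturity}/{TARGET_MATURITY}")
    if c.owner is None:
        gaps.append("no owner assigned")
    print(f"{c.name} [{c.framework_ref}]: " + (", ".join(gaps) or "on track"))
```

Run as an iterative practice rather than a one-time project, the same structure lets a team watch maturity trend toward target across successive assessment cycles.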
Brian Thomas: Thank you, I appreciate that. And I'll just highlight a couple of things that I thought were important. Obviously, quantifying risk takes the abstract out of the conversation so everybody can understand it, and ultimately you're lowering risk. It also creates that correlation, through quantification, between cyber and AI risk and the financial side of the business, and I think that's really important.
Yakir, Kovrr's quantitative models account for rare, high-impact events and incident types across AI vectors. What are some of the tail risks in AI that you believe are underappreciated today? And what should enterprises begin modeling now to avoid surprises in the future?
Yakir Golan: When we talk about tail risks in AI, we're really talking about those rare, high-impact events that can cause outsized damage.
Things like large-scale model manipulation, data poisoning in training sets, or even a systemic outage at a major AI service provider. But honestly, what matters most isn't identifying every possible scenario; it's how you start modeling them. A lot of organizations assume they need massive data sets or advanced analytics to quantify risk, but that's not true.
You can start small. You begin with one or two clear, high-priority scenarios that everyone can understand, technical and non-technical stakeholders alike. Maybe it's an AI model that fails in a critical business process. Maybe it's a third-party AI tool that accidentally exposes customer data. The goal isn't to capture everything at once.
It's to build a first directional view, something tangible enough to begin shaping mitigation plans, funding conversations, and strategy. Once that foundation is there, you can start layering in more granular metrics like average annual loss per scenario, high-severity probability, downtime duration, and even the financial impact of biased or misinformation-driven outputs.
And over time, that evolving picture helps organizations understand not just what could go wrong, but how much risk they're willing to tolerate. That's also the point where risk appetite becomes measurable. Boards can start setting thresholds, for example, deciding that no AI-related event with more than a 5% chance of exceeding a $5 million loss should be accepted.
That kind of clarity isn't possible without quantification. It shifts AI governance from being purely policy-based to being performance-based, where you can actually measure how resilient you are. Just like with cyber, it's not a one-time exercise; it's iterative. Each modeling cycle builds confidence, improves accuracy, and helps leaders stay ahead of those high-impact AI events instead of being surprised by them.
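As a rough illustration of how a scenario like that becomes a number, here is a minimal Monte Carlo sketch in the spirit of what Yakir describes. The frequency and severity parameters are invented for demonstration and do not reflect Kovrr's models, but the outputs map directly to the metrics he names: average annual loss and the probability of exceeding a $5 million threshold.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical parameters for one scenario, e.g., an AI model failing
# in a critical business process (illustrative values only):
EVENT_RATE = 0.8        # expected incidents per year (Poisson frequency)
SEV_MEDIAN = 1_500_000  # median loss per incident in USD (lognormal severity)
SEV_SIGMA = 1.2         # lognormal shape; larger = heavier tail, bigger rare losses

N_YEARS = 100_000       # number of simulated years

# For each simulated year: draw an event count, then sum a severity draw per event.
annual_losses = np.zeros(N_YEARS)
event_counts = rng.poisson(EVENT_RATE, N_YEARS)
for i, n in enumerate(event_counts):
    if n:
        annual_losses[i] = rng.lognormal(np.log(SEV_MEDIAN), SEV_SIGMA, n).sum()

aal = annual_losses.mean()                        # average annual loss
p_exceed_5m = (annual_losses > 5_000_000).mean()  # loss exceedance probability at $5M

print(f"Average annual loss: ${aal:,.0f}")
print(f"P(annual loss > $5M): {p_exceed_5m:.1%}")
print("Within appetite" if p_exceed_5m <= 0.05 else "Exceeds appetite (>5% chance of >$5M)")
```

A board-level appetite statement like "no more than a 5% chance of exceeding $5 million" then becomes a direct comparison against the simulated exceedance probability.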
Brian Thomas: Thank you. I appreciate that. And your quantitative models obviously help senior leadership and boards understand and prepare for these risks. You know, you mentioned a few of them, obviously there's data poisoning, there's outages, downtime. But what this does is it helps prepare for that mitigation planning, maybe budget, you know, downtime planning, that sort of thing.
And I think it's important that people are ahead of the game, because this stuff is inevitable. And Yakir, the last question of the day: looking ahead five or ten years, how do you envision the role of AI risk quantification evolving? With regulation like the EU AI Act in Europe, new business models, and more automation across operations, what will good risk management look like in the future?
And how will Kovrr’s vision support it?
Yakir Golan: We've actually seen this evolution before with cyber. In the early days, risk quantification was what finally bridged the communication gap between technical teams, executives, and regulators. It gave everyone a common language. Now the same transformation is starting to happen with AI.
Boards, investors, and regulators are no longer satisfied with qualitative statements like "we are monitoring AI risk." They want measurable evidence and a clear demonstration of exposure, controls, and the potential financial impact. Regulatory pressure is accelerating this shift. The EU AI Act makes senior management explicitly responsible for AI oversight.
That level of accountability means leaders have to understand risk in measurable, defensible terms, and that's exactly what quantification provides. In the US, for example, the SEC cyber disclosure rules have already set the standard for defining and reporting material risk. The same principles are coming to AI.
Companies will soon have to quantify what constitutes a material AI event and show how they're mitigating it. In Europe, the European Central Bank is taking this even further. Banks are now being asked to model how major disruptions, like geopolitical shocks, cyber incidents, or systemic failures, would affect their capital reserves.
That's financial risk modeling becoming a regulatory expectation, not an optional exercise. It's only a matter of time before AI-related scenarios are part of that list. So what does that mean for the future of AI risk management? From my perspective, the direction is clear: quantification will move from a best practice to a regulatory and investor expectation. Five to ten years from now,
good AI risk management will mean being able to answer three questions clearly: What are our AI-related exposures? How much could they cost under realistic scenarios? And what actions most effectively reduce that exposure within our defined risk appetite? That's where Kovrr's vision fits in. Our focus has always been on bringing financial discipline to technology risk.
Just as we did with cyber, we're building the models that let organizations express AI exposure in business and capital terms. The goal is simple: to give leadership a quantified view that supports regulatory readiness, investor transparency, and operational resilience, so AI becomes a managed source of value instead of a source of uncertainty and unpredictable loss.
Brian Thomas: Thank you. I appreciate you unpacking that for us. Absolutely, leaders, boards, and investors want this quantitative level of data so that it's measurable and we know how to mitigate future AI risks. You know, financial risk modeling is now becoming a requirement, as you mentioned, and it's so important.
And when you, again, quantify things, speak a shared language, and tie that correlation between the risk and the financials of the organization, people certainly pay attention. So I appreciate that. And Yakir, it was such a pleasure having you on today, and I look forward to speaking with you real soon.
Yakir Golan: Brian, thank you so much. It's been a pleasure participating, and I'm looking forward to listening to the next episodes.
Brian Thomas: Bye for now.