Leo Feinberg Podcast Transcript
Leo Feinberg joins host Brian Thomas on The Digital Executive Podcast.
Welcome to Coruzant Technologies, home of the Digital Executive Podcast.
Brian Thomas: Welcome to The Digital Executive. Today’s guest is Leo Feinberg. Leo Feinberg is the co-founder and CEO of Verax, a groundbreaking company delivering enterprise-grade trust solutions for generative AI. Founded in 2023, Verax has offices in Dallas, London, and Tel Aviv, providing organizations with real-time insights into their large language models.
The company’s innovative platform helps businesses monitor, fix, and optimize LLM behavior in production, ensuring safety and reliability without requiring complex configurations or human intervention. Under Feinberg’s leadership, Verax recently raised $7.6 million in seed funding to accelerate its mission of building trust in AI systems.
With over two decades of experience in technology and cloud services, Feinberg is a seasoned entrepreneur and thought leader. He co-founded CloudEndure, which was acquired by AWS, where he served as head of disaster recovery and mitigation strategy. His career also includes roles at Land Light Networks, Acela Web, DigiCache, and the Israeli Military Intelligence Unit 8200.
Well, good afternoon, Leo. Welcome to the show.
Leo Feinberg: Thank you so much for having me.
Brian Thomas: Absolutely. Love doing this. Appreciate you making the time out of London. Love getting up early to work across different time zones, which is totally awesome. 52 countries now and growing. So, Leo, I’m going to jump right into your first question.
You’ve had an impressive journey from co-founding CloudEndure to leading Verax. What inspired you to transition into the trust and safety challenges of generative AI, and how did your earlier experiences shape this move?
Leo Feinberg: So my DNA, since the very first company I started, was around helping big companies and enterprises to adopt innovative technologies.
And in the first company I started, it was the Web, back when it was still in its early days. Then in CloudEndure, it was naturally cloud, and now in Verax, it’s generative AI. So the technologies themselves are very, very different, but it’s the same type of help. It’s enterprises who face a very new category, and they need assistance and guidance about how to embrace that category and make it work for them.
Brian Thomas: Thank you. And I appreciate the background. You’ve done a lot of different things over your tenure, obviously helping companies be more innovative and adopt newer technologies. And this is an exciting time, I think, in the AI space. So I appreciate that share. Leo, Verax provides enterprise-grade trust solutions for large language models without complex configurations.
Can you explain how your platform monitors and optimizes LLM behavior in real time to ensure safety and reliability?
Leo Feinberg: Absolutely. When we explain it, we usually compare it to the very standard process of peers reviewing their peers’ work inside a company. Often when someone writes a document or a piece of code or creates any other deliverable, they would ask one of their peers, or maybe their manager, to review that before they release it.
And that part of the process is very, very important, because even when the reviewer is much less of an expert in that topic than the person who created the deliverable, the ability to have someone look at the deliverable with a set of fresh eyes, someone who still comes from the industry and understands what’s going on, is a very valuable contribution. What we’re doing at Verax is something very similar, but both the entity that creates the deliverables in our case and the reviewer are pieces of software. The entity would be an LLM or an LLM-based solution, and the reviewer is our product. And because it is software, the time frame for this entire process that I’ve mentioned is measured in seconds at most and hundreds of milliseconds at least.
So you can do that on the fly, automatically, but the principle is very simple.
Brian Thomas: Talk about speeding up a QA process, right? Peer-reviewed work is so important, and sometimes there are delays around that just based on resources, let alone the pace that humans work at. But I really am excited to hear about your platform, which will be able to do a lot of this work in literally a fraction of the time.
So Leo, could you share an example or case study where Verax’s solutions made a significant impact on optimizing and securing LLM behavior in a production environment?
Leo Feinberg: Absolutely. So one example that comes to mind is a customer of ours, a financial organization. They’ve had a very significant customer support organization, handling thousands of support queries every single day.
So it was a very important part of their company, very important to their customers, and a lot of resources were spent on that piece of the company. When LLMs started getting into the mainstream, they considered optimizing that part by having an LLM do the tier-one support activities for their customers and leaving the humans to handle the higher-tier queries.
And they went through a very complex process to implement that. And at the end of that process, they were very happy, because they managed to improve all of the standard metrics for customer support: time to first answer, time to ticket resolution, and naturally the cost of ticket resolution went down.
So they were very, very happy. And they saw that tickets were being closed right and left, and the overall spend for their company on customer support was significantly less. However, after they implemented that change, they realized that it may not all be great news for them, because tickets may be closed not necessarily because they were successfully resolved, but maybe because end users gave up, or the solution wasn’t good enough, or maybe because they realized that they were handled by LLMs and not human beings and they didn’t like it.
So they understood that they missed that transparency, which they had much more of when everything was handled by humans, and that they didn’t really know how well LLMs handled that first-tier support activity. They only knew that it was done faster and for a fraction of the cost. And even if they did understand anecdotally some issues here and there, the sheer volume of tickets every single day really didn’t allow them to understand what the main issues were, the main gaps in how well the LLM behaves, if there were any.
And this is where they spoke to us. Because the ability of our product is, first of all, to analyze all the interactions between an LLM and all its end users and provide the analysis of those interactions to the company. And second of all, whenever an interaction is problematic, our product would understand that on the fly and then fix the response of the LLM on the fly as well.
So not only did they understand the overall quality of the responses of their LLM in those situations, but they also found several very significant gaps. And lastly, Verax automatically fixed most of these gaps for them out of the box. So they didn’t have to go back to their vendor, because they didn’t build the solution themselves.
They bought an off-the-shelf, LLM-based customer support solution from a third-party vendor. So they didn’t need to go back to the vendor and beg that vendor to improve the product, or maybe switch to a different product. With the Verax solution, they basically got a better product immediately, out of the box, just by deploying Verax on top of the existing product.
Brian Thomas: That is amazing. I liked how you highlighted that implementing this eliminated a lot of time and money, especially at level-one support, but it also enabled the company to really delve in more, because they didn’t have that time before to see the real reasons behind customer issues and where the major problems lie with their customers.
I appreciate that. But the fact that you can take Verax and augment another platform with it is simply amazing. So I appreciate the insights and the share on that. And the last question I have for you today, Leo, if you could, is: with your extensive background, including roles at AWS and in Israeli military intelligence, what emerging trends in AI do you believe will most influence the future of trust solutions in the enterprise space?
Leo Feinberg: There are several components to my perspective on that. First of all, I think that in the enterprise space, we are still in the very early days of AI adoption. We’re still seeing enterprises being reluctant to adopt AI and LLMs for the more useful situations. They are still doing a lot of POCs; in most enterprises there is quite a lot of activity, but very, very few of those POCs actually end up being run in production, and even the ones that do are usually part of the low-risk, low-reward category. So they’re doing something, trying it in a specific niche, usually internally in the company, not necessarily customer facing, which is great in terms of risk management.
But then it’s not so great at demonstrating the value of AI for the organization. And we at Verax believe that AI has the potential to transform almost all aspects of how companies work for the better. I think that any single department that you take in a company can be significantly improved in the future, theoretically, with the use of AI.
So I think, first of all, the more extensive the use of AI is in companies, and the more business-critical use cases it covers, the more trust it will require. And we’re seeing that all the time: when an enterprise customer makes several steps on their journey towards adopting generative AI in general, we see how they want to implement AI more widely and in places that would benefit the company more, but they need to trust AI more to bring themselves to do that and for it to make sense from a risk management perspective.
And this is something that we are helping them with and we are helping companies that have spent a lot of time and a lot of money on building solutions for various use cases, but not trusting these solutions enough to take them to production. We are helping them to overcome trust issues they have and enable them to move to this next step, which is very, very important.
This journey of AI adoption in enterprises is going to need more and more trust-related solutions. We’re seeing that very clearly. So that’s the first half of what we’re seeing. I think the second half is how enterprises move away from AI as an end goal to AI as a means to an end. At the end of the day, enterprises need solutions to their business challenges, and whether those solutions are or are not AI-powered is not that important.
Naturally, solutions that are AI-powered can achieve much more; this is their strength. But I think that when ChatGPT first came out and AI started becoming very, very popular, there was a perception that AI by itself was going to solve every single challenge out there eventually. I think now people have already started understanding that AI is just one component, and you still need a solution on top of that with many other components that may be just as challenging.
And at the end of the day, that solution needs to successfully solve a business problem rather than just be cool because it has AI.
Brian Thomas: Thank you. I think that’s really, really important. We do have people who watch the news and see some negative things around AI potentially, and people just don’t understand, or they haven’t been trained or introduced to it. But to get to a place of trust, as you mentioned in the first part of your answer, building that trust means starting to walk forward and leaning into that adoption of AI.
I think that is so, so important. And realizing that AI can be a total game changer for your business if it’s implemented correctly. And again, we’re still going to need humans and that human machine interaction, that team to be able to successfully handle the business day to day operations and some of those projects.
So I appreciate your share on that Leo. And Leo, it was certainly a pleasure having you on today. And I look forward to speaking with you real soon.
Leo Feinberg: Thank you so much for having me and thank you for taking the time. I really, really appreciate it.
Brian Thomas: Bye for now.
Listen to the audio on the guest’s Podcast Page.