
Yori Lavi Podcast Transcript

Yori Lavi joins host Brian Thomas on The Digital Executive Podcast.

Welcome to Coruzant Technologies, Home of The Digital Executive Podcast.

Brian Thomas: Welcome to The Digital Executive. Today's guest is Yori Lavi. Yori Lavi, the field CTO of SQream, a leading provider of scalable, GPU-accelerated data analytics software for large data sets and AI/ML workloads, boasts over 25 years of experience as a visionary and operator in the high-tech industry, with a rich background encompassing managing partnerships, founding startups, and advising boards.

Yori has been instrumental in driving the development and implementation of cutting-edge technologies across diverse sectors, from enterprise software to IoT, having founded four companies and spearheaded products generating over $500 million, including serving as chief architect of a $300 million line of business and overseeing products with over 80 million users.

Yori is renowned for his knack for innovation and strategic growth, evident in Fortune 500 companies and startups alike.

Well, good afternoon, Yori. Welcome to the show!

Yori Lavi: Thank you very much. Really happy to be here.

Brian Thomas: Thank you. Absolutely. This is so fun. I appreciate it. And you're making some time today, hailing out of the great state of New York.

So, I appreciate that. I'm in Kansas City, so it's not much of a time zone change. Normally, I'm doing podcasts around the world, but I appreciate that. And Yori, we're going to jump right into your first question. With over 25 years in the high-tech industry, you've founded startups, driven product development, and been a key player in M&A transactions.

How do these experiences shape your vision as the field CTO of SQream? And what drives your passion for technology and innovation?

Yori Lavi: Okay. So, I believe engineers can develop anything you ask them to, but it's much more fulfilling to develop something that has an impact. I think that figuring out what makes an impact, and then figuring out where it can fit in, is what drives me.

So, I'm looking for where the technologies are mature enough, where the market is mature enough, that they actually have an impact. They're not just a novelty, but they are young enough, or changeable enough, that new technology I can bring to market can actually increase the impact. And the impact, at the end, is on business, on people's lives.

It's the idea that it's more than a technology exercise. And the thing that drives me today is, you know, with almost three decades behind me, I can tell you that the need for people to have actionable insight in time hasn't changed. I've been working with data for at least three decades, and this was one of the first things I worked on.

And it drives the industry today. That $2 trillion NVIDIA market cap is exactly that: people want to have insight while it still matters. How do you actually do that? So it's a nice place to be.

Brian Thomas: Thank you. I appreciate you sharing that, and some of the backstory. Your engineering background, of course, has made a lot of contributions in this industry.

You've got a really illustrious resume there; I appreciate your story. And Yori, SQream stands out for its GPU-accelerated data analytics software. Could you explain the core technology behind SQream and why GPU acceleration is a game changer for big data analytics and AI workloads?

Yori Lavi: Sure. So, think about the GPU. The GPU had kind of a humble beginning, where you used it to play Pong or Tetris on a computer, and today you use the GPU for lifelike, real-time animation. A few decades ago, if you wanted to build a supercomputer, it was tens of millions of dollars or more to do this. Now you can have this in your data center at a fraction of the cost, and it'll do much more.

So, the GPU basically enables you to process data, numerical data, granted, but a thousand times or more what a CPU can do today. If you look at cost performance, CPU to GPU, it used to be considered 40 or 50 to 1; the estimate today is closer to 200 or 300 to 1. GPU processing compared to CPU is much faster.

Okay, that's one aspect. The other aspect is, if you look at most computing today, it's done in the data center, as the name says. Most of what data centers do is data processing. But if you do this with a CPU, it's extremely inefficient. So, the founders thought, okay, what if we take this amazing extra capability and just refocus it to deal with data?

We deal with data much more than we deal with the compute of ML/AI. And that's what they did. The idea is that SQream can process data in parallel. A normal GPU, even a smaller one, will have 10,000 to 50,000 cores, compared to the machine it's running on, which will have 16 to 100 cores. Those tens of thousands of cores compared to hundreds, GPU to CPU, is what we use.

We take how you process data and make it so we can utilize the GPU. Then we look at the data journey, and at any place in the data journey, from disk all the way to being shared with somebody, we make sure we do it in a way that's parallelized, with no bottlenecks, so we can push as much data as possible into the GPU and let the GPU work.
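To make that parallelism concrete, here is a minimal sketch of the general idea, using the open-source RAPIDS cuDF library as a stand-in rather than SQream's actual engine; the file name and column names are hypothetical, and the point is only that the same columnar query runs on thousands of GPU cores instead of a handful of CPU cores.

```python
# Minimal sketch of GPU-parallel data processing, using the open-source RAPIDS
# cuDF library as a stand-in. This is not SQream's engine; it only illustrates
# the idea described here: the same columnar aggregation gets executed across
# tens of thousands of GPU cores instead of a few CPU cores.
import pandas as pd   # CPU baseline
import cudf           # GPU dataframe library (requires an NVIDIA GPU + RAPIDS)

# CPU path: the scan, hash-group, and reduction run on a handful of cores.
cpu_df = pd.read_parquet("events.parquet")          # hypothetical file
cpu_out = cpu_df.groupby("device_id")["latency_ms"].mean()

# GPU path: the same query, unchanged, but every stage is parallelized across
# thousands of GPU cores.
gpu_df = cudf.read_parquet("events.parquet")
gpu_out = gpu_df.groupby("device_id")["latency_ms"].mean()

print(gpu_out.head())
```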

There are no bottlenecks, and it's truly scalable. And because we're doing this on a GPU, the cost performance is usually 10 to 1 compared to what you would get with a CPU. What do we want to do with that? And why is it a game changer? Okay, so raw data has zero value to an organization. Raw data that went through the lens of a machine learning model suddenly has an insight.

That's what people want, that insight. Now, if you look at that, you need a GPU to do the training. But, for the most part, the part that takes the longest is something called data prep. A normal machine learning model takes about three months to build, and 80 percent of that is the data prep. SQream compresses that by over 90 percent.

So, let's take one example. What a company really wants is a better model, a better algorithm than what it has in production today. It takes about three months to develop one, and you usually go through four or five iterations until you have something better than the incumbent. So they spend about a year of data scientist time, computing, and storage in order to do that.

SQream enables the data scientist to work independently, increases productivity, and gets this done much faster, so it takes one fifth of the time. Instead of spending a full team for a year, you can get the result in a few weeks to two months. That's a major difference.
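As a rough back-of-the-envelope check on those figures, here is the arithmetic using only the numbers mentioned in the conversation (three months per iteration, four to five iterations, about 80 percent of each iteration spent on data prep, prep compressed by over 90 percent); the exact values are illustrative, not SQream benchmarks.

```python
# Back-of-the-envelope arithmetic using the figures mentioned in the episode.
# These are illustrative numbers from the conversation, not SQream benchmarks.
months_per_iteration = 3.0
iterations = 4.5            # "four or five iterations"
prep_share = 0.80           # data prep is ~80% of each iteration
prep_compression = 0.90     # prep time compressed by "over 90%"

baseline = months_per_iteration * iterations            # 13.5 months, "about a year"
accelerated_iteration = months_per_iteration * (
    (1 - prep_share) + prep_share * (1 - prep_compression)
)                                                        # ~0.84 months per iteration
accelerated = accelerated_iteration * iterations         # ~3.8 months total

print(f"baseline ~ {baseline:.1f} months, accelerated ~ {accelerated:.1f} months")
print(f"speedup ~ {baseline / accelerated:.1f}x")
# Prep compression alone gives roughly 3.5x; the "one fifth of the time" figure
# also folds in the data scientist working without waiting on data engineers.
```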

And if you can do that: we have a customer, a manufacturing plant, that increased its yield from 50 percent to 90 percent. That's almost double what they get out of the plant, with zero change to the infrastructure, just software. That's the difference. Does this make sense to you?

Brian Thomas: Absolutely. Love that. And I know the GPU is very powerful today; we utilize it in a lot of different applications. But you really are changing the game as far as data analytics, and, you know, I'm sure large language models and AI/ML workloads are certainly going to improve because of this.

I appreciate the examples. That's amazing. And you've been instrumental in addressing extreme data challenges. What are some of the most common hurdles companies face with big data analytics today, and how does SQream help overcome those obstacles?

Yori Lavi: So, I think there are two mundane problems that, because they are not solved nicely, develop into major organizational issues.

One is slowness of development, and the other is the fact that data scientists require data engineers, two different roles reporting to two different people, and that creates slowness. So, taking raw data to something you can use, to where a data project actually makes sense: everybody wants to be a data-driven decision maker.

But in order to do this, you need the algorithm, you need the machine learning, you need the Gen AI. And the problem today is that that process is manual; you need people to come up with this model. And it's slow, so it's not living up to the hype. It takes longer, so it's more expensive. So we are in a very weird situation right now: if you are a mid-size or large company, you have data, and you know you want to use it to build a better algorithm.

You have the budget, because everybody agrees it's critical, yet you still have to wait a quarter or two before somebody can start working on your project. That backlog actually slows down all the potential benefit of what people can do. And it starts with the fact that it just takes a really long time to develop this model, because it's kind of manual.

There are a lot of ways people try to automate this, and we're only at the beginning there. So what SQream is trying to do is address this. If you develop faster, if the data scientist is five times more productive, if adding one data scientist increases the productivity of everyone else, that synergy means I can have a larger team and actually have them work with zero bottlenecks.

You have data, you have budget, and hey, I have a team that can do this, and I didn't need to increase its size. So it's a mundane problem. It's slowness; it's technologies that used to work a decade ago and, because the data is much bigger, don't work today. And hey, you want it cheap, or not necessarily cheaper, but fitting within your budget.

So, it's a mundane problem that translates to: we're using only a fraction of what we could if we could just come up with better models faster.

Yori Lavi: Does this answer the question?

Brian Thomas: Absolutely. I really love that. And the fact that you can hit the ground running using your platform makes it such a valid business case to get started right away.

Especially, as you said, for companies that do have budgets; they appreciate that. Your last question of the day. Your recent focus has been on big data analytics and machine learning. How do you see these domains evolving in the next few years, and what role will SQream play in shaping the future of these technologies?

Yori Lavi: So, the first thing is that we are arriving at a point where even mid-size companies can generate more data than they are equipped to actually use. So the techniques that used to say, hey, we'll do filtering, we'll do cubing, we'll do aggregation, we'll do all kinds of things that will enable us to query faster.

They stopped working. The scenario today is that people want to use large data sets. The data set keeps changing frequently, because you now have a technology and a culture of getting the latest data. And the last thing is that people actually want to get the insight while it's still relevant. Large data, fast changing, and you actually care about getting the insight, basically processing all the new data together with the historical data in near real time.

So, the combination of these three, the intersection of that Venn diagram, basically breaks most of the existing technologies. All the techniques are broken, and people call this a zero-sum game. Okay, tell me which pain you are willing to endure: don't use all the data, or use inference or insight that's a day old or much older than it could be.

Spend more money, get information slower, work on a smaller amount of data. It's a zero-sum game: pick one and I can address it, but you're going to suffer on all the others. And we think the GPU, because its cost performance is so much better than the historical one, the incumbent, the CPU, can break this zero-sum game.

So, more companies cross into the territory where older techniques stop working, but the GPU can actually help them get there. That's one aspect. The other is Gen AI. The biggest thing about Gen AI is that you can have an intelligent conversation with a machine that's a subject matter expert on the domain you want to work on.

That's amazing. The limitation today is that Gen AI requires data with context: a document, a programming language, something with context that it can analyze and understand. But if you look at most enterprise data, it's structured. If you look at an ERP system, you'll have 70,000 tables with hundreds of thousands of columns.

I guarantee you the names of the columns are meaningless, and most of what they hold are numbers or categorical data, the name of a street. Okay, it's meaningless. You need somehow to take this huge amount of data and create some context for it, so you can feed it into the Gen AI. But Gen AI today, for an enterprise, deals only with unstructured textual data, while 90 to 95 percent of the information, especially all the operational data, is structured, so they can't use that.
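A minimal sketch of the pattern being described might look like the following: derive a few meaningful figures from the raw structured tables, then render them as plain language with context before handing them to a Gen AI model. The table, column names, and prompt wording are hypothetical, and this is not SQream's product API; it only illustrates the structured-data-to-context step.

```python
# Sketch of turning raw structured data into context an LLM can reason about.
# Table, columns, and prompt wording are hypothetical illustrations.
import pandas as pd

orders = pd.DataFrame({
    "plant_id":   ["A12", "A12", "B07", "B07"],
    "units_made": [950, 980, 400, 410],
    "units_good": [470, 495, 360, 372],
})

# Step 1: aggregate the raw rows into a few meaningful figures per plant.
summary = orders.groupby("plant_id").agg(
    total_made=("units_made", "sum"),
    total_good=("units_good", "sum"),
)
summary["yield_pct"] = 100 * summary["total_good"] / summary["total_made"]

# Step 2: turn those figures into plain language with context.
context_lines = [
    f"Plant {plant}: yield {row.yield_pct:.0f}% ({row.total_good:.0f} good units "
    f"out of {row.total_made:.0f} produced)."
    for plant, row in summary.iterrows()
]
prompt = (
    "You are a manufacturing analyst. Given today's figures:\n"
    + "\n".join(context_lines)
    + "\nWhich plant needs attention, and why?"
)
# This text, not the raw tables, is what would be fed to the Gen AI model.
print(prompt)
```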

That's a huge gap between what Gen AI can process and what the customer wants to do. If you, in your organization, could have an intelligent discussion with a machine about something that happened today in your organization, that would be amazing. And every manager, at every level, could have that.

But the idea is that they actually need SQream to accelerate the development of machine learning models on top of the raw data, so you can create context, feed it to the Gen AI, and close this gap. So, more companies need more processing; that's in general. And Gen AI has this huge blind spot of being unable to deal with structured data directly.

You can't just feed numerical data into it and expect it to make sense of it.

Yori Lavi: Does this answer the question?

Brian Thomas: Absolutely. Thank you, Yori. I appreciate you really unpacking all that for our audience. This is huge, by the way. And you know that age-old saying: you can pick two out of three; it can be good and fast, but not cheap, or it can be cheap and fast, but not good.

But it almost sounds like you're on the brink of making all three possible in this world we live in, with your platform. So, I really do appreciate that. And Yori, it was such a pleasure having you on today, and I look forward to speaking with you real soon.

Yori Lavi: I appreciate the opportunity and would like to do this again when possible. So, thank you.

Brian Thomas: Bye for now.

