Brad Micklea Podcast Transcript


Brad Micklea joins host Brian Thomas on The Digital Executive Podcast.

Welcome to Coruzant Technologies, Home of The Digital Executive podcast.

Brian Thomas: Welcome to The Digital Executive. Today’s guest is Brad Micklea. Brad Micklea is the Founder and CEO of Jozu and a project lead for the open-source KitOps.ml project, a tool set designed to increase the speed and safety of building, testing, and managing AI/ML models in production.

This is Brad’s second startup. His first was Codenvy, the market’s first container-based developer environment, which was sold to Red Hat in 2017. In his 25-year career in the developer tools and DevOps software market, he’s been the GM for Amazon’s API Gateway and built open- and closed-source products that have been leaders in Gartner Magic Quadrants.

In his free time, he enjoys cycling, reading, and vintage cars.

Well, good afternoon, Brad. Welcome to the show!

Brad Micklea: Thank you very much for having me, Brian!

Brian Thomas: Absolutely. I appreciate you jumping on a podcast, getting the day started out right out of Canada of all places, right? Normally I traverse some of the continental United States, the lower 48, I guess, as they call it, but appreciate you jumping on hailing out of the great space of Toronto there.

So, Brad, jumping into the first question we have for you today: starting with Codenvy and moving through your career to Jozu and KitOps.ml, how have your past experiences shaped your current ventures, and what lessons from Codenvy have been most influential in your approach to leading Jozu and developing KitOps?

Brad Micklea: Ah, great question. Okay. Yeah. So let me give a little background first. Codenvy was my last startup. I worked on it with Tyler Jewell, who’s now the CEO of Lightbend. We kind of worked on that together. It was the first web IDE that could do compiled languages, and it ran the developer environment inside containers.

And this was back in 2015, basically. And so containers were really, really new. And honestly, even Docker at the time had trouble keeping their Docker Swarm environments kind of stable. It was really new technology. It was not really production-ready quite at that time. And I’m sure somebody’s going to throw something at me for saying that.

But we took a big risk and just said, you know what? We think this is the future. It’s clearly not ready yet, but let’s go all in now. And let’s see if we can get ahead. And honestly, the FUD from our competitors was really easy. They were just like, hey, Mr. Customer, if you can get a container running and, you know, you can keep it stable, then you should totally go with these Codenvy guys.

But if not, we use VMs and that runs your whole data center. So maybe just trust us. And that was tough. That was tough for the first, you know, year or so. But then containers did take off and it became something that everybody was excited about. Everybody realized it was much simpler, much easier. You know, I wanted to be part of that.

And by the time that happened, we were so far ahead in our understanding of containers versus our competitors, and had so many more capabilities around it, that we were probably easily a year, maybe two years, ahead technically of where our competitors were going. So, they started to move to containers.

We were already way into that. And that was huge for us. That really helped us get to a point where, you know, Red Hat was interested in acquiring us and ultimately did acquire us. We got a very healthy multiple at the time. And I think, when it happened, we were probably the largest web IDE acquisition in history at that point.

So that was fantastic. And so that really has shaped, to your question, Brian, how I think about startups. When I think about startups, I think that there’s this little window, and it’s tough to get right, but you gotta be ahead of where everybody thinks is sane, but not so far ahead that you’re actually insane.

There’s that little part in the middle where you want to be. And that’s how we’ve thought about Jozu. Today, if you look at the world of AI, I mean, it’s on fire. Everybody is talking about AI. People, you know, at the grocery store talk about AI, which is not common for most tech trends. But it is so new that very, very few people, very, very few companies, have yet had a chance to really understand and put models into production, facing users, being used daily.

That’s still quite rare. Now, we believe that that is, much like containers, going to take off, that ultimately everybody is going to build an AI capability into most of the things that you interact with as a user. And what we also know from containers, and a number of other technologies, is just that developing something is a lot easier than actually operationalizing it, than running it in production safely, securely, scalably.

And so, with Jozu, we’re really focused on that second problem. And we know very, very few people are really ready for what we’ve started to build today, but that’s okay, because everybody is going to need it in a year, year and a half. And so that’s, that’s really been it: the building of the portfolio of tools that are going to help people tomorrow as they hit these pains, because these pains are going to be ugly: security risks, applications falling over, customers getting upset.

You don’t want to have those things. You want to have a tool set that, you know, has been kind of readied beforehand, so you can avoid those problems.

Brian Thomas: Thank you. I appreciate the backstory, really do. You took a big risk,

obviously, working in that container orchestration space, right? Yeah, you know, it certainly was something, taking a risk like that. But again, the bigger the risk, the higher the rewards, and I’m, I’m very pleased to hear the outcome of that and Red Hat purchasing Codenvy. So, I appreciate that.

And Brad jumping into the next question. With Jozu being the latest venture, what gap did you see in the market that led to its inception? Can you share more about Jozu’s mission and the problem it aims to solve within the tech ecosystem?

Brad Micklea: Yeah. So, when you’re looking at the AI market, you know, there’s things like OpenAI and Google and Anthropic that are building these massive, massive large language models, or LLMs,

and they kind of have these chatbots. And that’s something that’s very easy for people to kind of play with. And if you’re going to build kind of your own solution, you can go in that direction and kind of work with those types of vendors, or you can take models that have been built and open-sourced and bring those in house and really kind of tailor them to your needs.

Now, let’s say you go down that second path. Today, there’s a lot of tools that are referred to as MLOps tools: machine learning operations, in other words. Now the problem with those tools is that, despite their name, they’re actually really focused on developing models, training models, experimenting with models, all the things that come before you actually operationalize them.

Now on the operational side, we have a lot of tools, and some of your listeners may have heard of DevOps, and, you know, you have lots of DevOps tools that are there to help you get your normal, let’s call them applications, your non-AI applications, out into production reliably, safely, without risk, within compliance, within security boundaries, all that good stuff.

And that’s great, but those don’t work particularly well for AI, for a couple reasons. AI models and datasets are absolutely massive. They’re a hundred to a thousand times larger than what your normal kind of asset going through a pipeline would be. Plus, the right solution for deploying them is not always to just throw a container around them and ship it out.

Sometimes that’s right, but not in every case. You also, because of how complex the datasets and the models are for AI, have a kind of risky situation where it’s trivially easy for somebody to tamper with a dataset or tamper with a model, and extremely difficult for you, as a consumer or user of that model or dataset, to figure that out.

Again, when you have hundreds of gigabytes of data, how are you going to find the one little bit that changed? It’s that needle-in-a-haystack problem. So we really saw that there was a lot of focus on the development side, and that makes sense, because that’s what people are really doing a lot of today, but very little on the operations side.
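Content addressing is the standard defense against this needle-in-a-haystack problem: record a cryptographic digest of each artifact when it is published, then recompute and compare before use. The sketch below is a generic, hypothetical illustration of that idea, not KitOps’s actual implementation; the function names are invented for the example.

```python
import hashlib

def file_digest(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 in 1 MiB chunks, so even a
    multi-gigabyte dataset can be fingerprinted without loading
    it all into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path, expected_digest):
    """Return True only if the file still matches the digest that
    was recorded when the model or dataset was published."""
    return file_digest(path) == expected_digest
```

Because any single flipped bit changes the digest entirely, a consumer never has to hunt for the tampered byte; the mismatch itself is the alarm.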

So, we said: let’s really focus on building the tools that people need, given our experience at AWS, Red Hat, and startups, on the operationalization side, and let’s make sure that we get all that for AI, get it ready for people. Our first step has been the KitOps open-source project that we built and have contributed back to the community.

And KitOps is a universal packaging and versioning system for AI projects. It uses something called a model kit, and the model kit packages your model, all your data sets, all the code associated with it, all the configuration into a single package, and it versions that. So that anybody working with that model, because you’re going to have data scientists working with it, you’re going to have application developers working with it, you’re going to have infrastructure engineers working with it, security folks, lots of people need to touch that.

And this way they all know this is the version of the model that was trained with these versions of the data set that needs these configurations. Et cetera, et cetera. And they don’t need to guess. They don’t need to kind of go back and say, Hey, data scientist, which one did you use over here? And which one did you use over there?

That stuff just slows everything down and adds risk. Now, the package can be stored in the repositories that all the enterprises already have today. So it’s not like you need some magical new thing; you can just put it where you put all the rest of your containers, which is super convenient, because people have built processes around putting things in and getting things out of those registries.

And we want people to be able to take advantage of that. So, this really makes it a lot easier and safer for people to handle the kind of development process, the deployment process, and ultimately the kind of care and feeding that will happen in production for all these models. So that’s KitOps and that’s the model kits.
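To make the version-pinning idea concrete in the abstract, a minimal manifest along these lines ties a model, its datasets, and its configuration together under one version tag, so every role on the team agrees on exactly which files belong together. This is a hypothetical sketch, not KitOps’s real manifest format; the names (`build_manifest`, `sha256_of`) are invented for illustration.

```python
import hashlib
import pathlib

def sha256_of(path):
    """Content digest of a single artifact file."""
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

def build_manifest(version, model, datasets, config):
    """Pin every artifact's content digest under one version tag,
    so data scientists, app developers, and infrastructure
    engineers can all reference the same immutable bundle."""
    return {
        "version": version,
        "model": {"path": model, "digest": sha256_of(model)},
        "datasets": [{"path": d, "digest": sha256_of(d)} for d in datasets],
        "config": {"path": config, "digest": sha256_of(config)},
    }
```

A manifest like this is plain data, so it can be serialized to JSON or YAML and pushed to the same registries an enterprise already uses for containers, which is the convenience being described above.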

That’s the open-source part. We are hoping that other vendors participate, that, you know, other platforms and other tools will adopt this, because we see it as just a better way to make the handling of these models safer, more deterministic for everybody. The Jozu part is that we’re going to add a hub on top of that, and you can think of it as being similar to something like Docker Hub, where you’ll be able to go and discover and find the model kits that people have posted and made available.

Now, some of your audience who are more familiar with AI might be thinking, wait, that sounds a lot like Hugging Face. I can go to Hugging Face and get a bunch of models.

Our goal really is to be used by enterprises ultimately as a curated library of models and data sets. So, you can go and get the big public things from Hugging Face, and that’s what we do, and that’s what most people do, and that’s fine. But you have very little control or visibility into how things got onto Hugging Face.

Somebody posts them, you don’t know who that is. And somebody else changes them and reposts them and you don’t know who that was or what the change necessarily was. And that’s scary. As an enterprise, that should be scary to you. You know, you’re responsible for security, you’re responsible for compliance, you’re responsible for your customers.

We want JozuHub to instead be something you can deploy internally. Take the things that you trust and say, hey, here’s my data sets, which I know all about. I don’t want anybody looking at these. These are private. These include customer data that I don’t want to share with anybody. I put that in JozuHub, inside my walls, with the models that I trust inside my walls.

And now I can trace every change, every version. I can diff things. I know exactly where things changed. It just makes things a lot safer than if you’re always going out to kind of an unwashed public location and grabbing things.

Brian Thomas: Great. Thank you for sharing. I appreciate that. Kind of breaking that down, especially for the techies in our audience.

Really do. I like that centralized place that’s trusted, being more efficient with the process and controlling that versioning, which I really liked as a developer in my past life. So, Brad, next question: how do you envision the future of DevOps integrating with AI operations? And what advancements do you foresee in the next few years that could revolutionize how we build, test, and manage these AI models?

Brad Micklea: Yeah, yeah, great question. So, I mean, first of all, obviously, I don’t have a crystal ball. Nobody does. But certainly when I look at things with 25 years of experience, what I see is, and this comes back to what I mentioned earlier, you’ve got these vendors like OpenAI and Google and others that have these massive language models that are quite powerful and really quite impressive.

And they have APIs, and you can just go and use them. You can basically say, hey, why don’t I take, you know, this question from my customer, and then I’ll just ask OpenAI, I’ll get the response, and I’ll pass it back to my customer. That is a very kind of quick way to solve the problem. And it’s quite easy.

And so that’s very appealing. The way I think about it is, I believe AI is one of these kinds of generational sea-change technologies. And when that happens, as a business, you have to look at that and say, how do I make that sea change kind of pay off for me more than my competitors? How do I compound the value of that so that my business grows twice as fast, three times as fast, ten times as fast as my competitors?

And I use this as a way to either pull out front, if I’m already in front, or leapfrog a bunch of people, which I might not otherwise be able to do. Now, as I think about it, every question I send to an OpenAI or a Google or what have you, like I said, super easy to do that. But what I’m really doing is I’m training a model that I don’t own, that my competitors can use just as much as me.

And so how am I going to get a 10x benefit versus my competitor if we’re both using OpenAI? I’m not. Maybe I can get lucky and get a super amazing engineer who can come up with a really great way to prompt AI better than my competitors, but that’s, at best, maybe a 2x improvement. Like, maybe; more probably it’s some small percentage, too small to really make a difference.

And you think about this: in the internet, you know, world, you had lots of brick-and-mortar companies that basically tried to just slap on a little e-commerce face. And then you had Amazon, that basically said, no, we’re going to go all in on e-commerce. We’re not brick and mortar. We’re just doing that.

And for a long time, they kind of grew slowly and, well, you know, weren’t in danger of overtaking anybody. And now, I mean, the world is a different place because of Amazon. So those changes are coming. To me, if you want to be on the right side of history, you want to figure out: how do we get our own technology internally?

How do I make my model so much better than my competitors’ that they can use OpenAI all day long and I’m still going to be better? And that’s not as crazy as it sounds, because OpenAI, Google, all these big chatbots are designed to be generalists. They win if they can solve as many problems as possible, not if they have the best answer to any one particular question.

But as a company, you don’t need to do everything. You need to do your thing better than your competitors. And so that’s where it feels like you get a model, you train the heck out of it, you put the work in, and you try and get that to the point where your answers are just way better than what your competitor can get out of an OpenAI or a Google.

Brian Thomas: Thank you. I appreciate you sharing insights on where you think the future lies with some of these advancements, but also really sharing your thoughts around how you can get that competitive edge in this vast sea of AI work that everybody’s working on or leveraging, as you said, OpenAI. And sometimes it’s hard to kind of crack that nut.

So, appreciate the share. And Brad, last question of the day. If you could briefly share, looking forward, what major trends do you predict might emerge in the developer tools and DevOps markets? And how should companies and developers prepare for these changes?

Brad Micklea: Yeah, well, I think there’s already a lot of changes.

I mean, you look at GitHub Copilot and the other kind of copilot- and agent-style AI offerings available for developers, and it’s, it’s fantastic. And I, for one, I know there’s always, of course, whenever things change a lot, there are always people whose answer is, well, the sky is falling.

We’re all going to lose our jobs. There’s not going to be any more developers. I’ve never thought of it that way. And my, my background, weirdly, although I grew up disassembling computers and programming, I actually did an English lit degree and did a bunch of stuff kind of more in the arts. And so I see these weird patterns and, and kind of relationships that others may not.

But whenever I’ve seen great developers work, it’s reminded me of the old master painters, you know, like a Rembrandt or whatever, when they did these giant canvases that were, you know, 8 feet by 20 feet or whatever. They did not paint every brushstroke themselves. They had all sorts of apprentices and helpers.

The apprentices would do, you know, chunks of the painting that were really not that challenging, and the master would focus on the stuff that was the hard part, the part the eye was really drawn to. And I think of developers that way. A great developer knows that if they can get rid of some of the drudgery of their work, which they have to do, somebody’s got to paint the canvas brown, you know, before you go and put the hills on, let the AI do that stuff.

And then you focus on the great stuff, the stuff that is creative, the stuff that is innovative. My CTO, for example, has more time today to spend on strategic thinking than he had five years ago, because he’s able to take those AIs and have them do some of that drudgery work. That’s exciting.

That’s how people should work. We want more time to think, more time to be innovative, to be strategic. The part that’s tricky is the operational side, because I think there’s a lot of opportunity for AI to help there as well, but it is more sensitive and more delicate, because there’s often a high time component.

Certain problems have to be solved within a certain window. You can’t kind of walk away and think about it as much if there’s something going on in production. And so that part is going to be, that part’s going to be trickier. I’m not quite sure how that’s going to play out. I mean, obviously we’d love to have things be just self-healing, but we’ve been trying for that for a long time.

And I don’t know yet that the LLM, with its strength in kind of language processing, is really going to help as much on things like self-healing infrastructure, but I can’t wait to see what does happen. And I’m, I’m looking forward to being proven wrong there.

Brian Thomas: No, I appreciate that. Your insights are certainly spot on; at least, they resonate with me.

I’m sure it will resonate with some of our audience as well. But I love the fact that we can leverage AI to do some of the mundane tasks. So we can focus more on the creativity and innovation that we need to focus on to make us all better. So, appreciate that. And Brad, it was certainly a pleasure having you on today and I look forward to speaking with you real soon.

Brad Micklea: Awesome. Thank you. And if folks want to check out more about what we’re doing, you can head over to Jozu.com or KitOps.ml. Feedback is a gift. I’ve always said that. And so, any feedback people have for, for me, for the team, for the project, for the company, please send that in. I’m always, always appreciative.

And thank you very much for your time. Brian, this was awesome.

Brian Thomas: Absolutely. Bye for now.
