Pratik Balar Podcast Transcript

Pratik Balar, Co-Founder of NodeOps

Pratik Balar joins host Brian Thomas on The Digital Executive Podcast.

Brian Thomas: Welcome to Coruzant Technologies, home of the Digital Executive podcast.

Welcome to the Digital Executive. Today’s guest is Pratik Balar, co-founder and tech lead at NodeOps, a Web3 infrastructure company building decentralized compute systems that power the future of AI and cloud services. With over three years of hands-on experience, Pratik has architected and managed hybrid, enterprise-grade infrastructure across tier-one cloud providers and bare-metal environments, optimizing for scalability, uptime, and performance at production scale.

He’s deeply invested in building infrastructure that lowers the barrier to adoption for developers while reinforcing the long term sustainability of decentralized protocols.

Well, good afternoon, Pratik. Welcome to the show.

Pratik Balar: Good evening.

Brian Thomas: So I appreciate that you’re in Bengaluru, India right now. It’s early morning for me, but it’s evening for you, so I appreciate you making the time, Pratik. Let’s jump into your first question. You’ve been instrumental in building NodeOps’ decentralized compute systems.

How do you envision DePIN 2.0 transforming the future of AI and cloud services?

Pratik Balar: Thank you, I think it’s a really good question. When it comes to DePIN 2.0, all I can think of is scalability, cost efficiency, and privacy. DePIN 2.0 will transform AI and cloud services by creating a more scalable, cost-effective ecosystem.

Of course, by leveraging blockchain token incentives, DePIN 2.0 enables anyone to contribute compute resources like GPUs and storage, reducing reliance on centralized providers, the tier-one clouds. I won’t name anyone, but you get the idea. This democratizes access to AI infrastructure, slashing costs by 70 to 90% in some cases compared to traditional cloud providers, while enhancing scalability through a global, permissionless network.

In DePIN, anybody can spin up and onboard from any region of the world and be part of the whole DePIN cloud. DePIN 2.0 also integrates real-time data from IoT and edge devices, as projects like Geodnet are doing, enabling AI models to train and infer on diverse, high-quality datasets.

So yeah, that’s how I envision the DePIN 2.0 world and how it’s transforming the future of AI and cloud.

Brian Thomas: That’s amazing, and I appreciate you breaking down DePIN. Obviously that’s so important now in the decentralized world. What I took away from this is the scalability and the fact that it’s cost-effective, leveraging blockchain technology for decentralized services.

You mentioned providing token incentives, which I think is awesome, and DePIN 2.0 allows advanced integration; for example, you mentioned IoT devices, cloud, et cetera. So I really appreciate you highlighting that for us. Pratik, NodeOps focuses on building infrastructure that is open and trustless. What are the biggest technical and philosophical challenges in designing such systems at scale?

Pratik Balar: Building NodeOps has been quite a ride, to be honest. Designing open and trustless infrastructure at scale, like NodeOps does, faces technical challenges such as ensuring scalability, low-latency consensus, and robust security against attacks and DDoS, which we faced during our testnet, and we are continuously facing DDoS on our mainnet as well.

I think last month we got somewhere around seven-ish DDoS attacks. Luckily we have a lot of protections integrated to take care of that. Interoperability across heterogeneous nodes and maintaining data integrity in a decentralized environment are also crucial. We had to create a lot of in-house tooling to secure node-to-node connectivity.

We leverage technology that’s used across a lot of stacks, like mTLS, mutual TLS: a certificate is issued to both parties, and both parties can authenticate each communication using it, so the TLS trust is mutual between the two parties.
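As a rough illustration of the mutual TLS setup described here, this is how certificate verification on both sides can be configured with Python’s standard `ssl` module. The file names are hypothetical placeholders, not NodeOps’ actual tooling:

```python
import ssl

def make_node_context(server_side=True, cert=None, key=None, peer_ca=None):
    """Build an SSL context that requires the *peer's* certificate too (mTLS)."""
    proto = ssl.PROTOCOL_TLS_SERVER if server_side else ssl.PROTOCOL_TLS_CLIENT
    ctx = ssl.SSLContext(proto)
    ctx.verify_mode = ssl.CERT_REQUIRED      # the mTLS part: reject peers without a valid cert
    if cert and key:
        ctx.load_cert_chain(cert, key)       # this node's own identity
    if peer_ca:
        ctx.load_verify_locations(peer_ca)   # CA that signed peer node certificates
    return ctx

# e.g. make_node_context(cert="node_a.pem", key="node_a.key", peer_ca="mesh_ca.pem")
```

A plain TLS server normally never asks the client for a certificate; setting `verify_mode = CERT_REQUIRED` on the server context is what makes the authentication mutual.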

Things like that are what we do to secure node-to-node connectivity. That’s one example. A second would be managing state across nodes. Let’s say the NodeOps orchestration is running a Polygon network, say a Polygon validator, on your node, and for some X or Y reason your node goes down or is under maintenance.

Now we not only have to reschedule that workload, but also move the whole state, which is the blockchain data, to another node. We have to keep syncing, keep backups of the state, keep snapshotting it, and then migrate it to some other node.
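The snapshot-and-migrate flow described above can be sketched roughly as follows. This is an illustrative toy, with in-memory dicts standing in for on-disk chain data; the class and field names are my assumptions, not NodeOps’ orchestrator API:

```python
import copy
import time

# Toy sketch of snapshot-and-migrate: the orchestrator keeps periodic copies of
# a node's chain state and seeds a replacement node from the latest one.
class Node:
    def __init__(self, name):
        self.name = name
        self.state = {}              # stand-in for on-disk blockchain data

snapshots = []

def take_snapshot(node):
    snapshots.append({"taken_at": time.time(),
                      "state": copy.deepcopy(node.state)})

def migrate(to_node):
    """Reschedule onto a new node: restore the latest snapshot; the node would
    then sync the remaining tail of the chain on its own."""
    to_node.state = copy.deepcopy(snapshots[-1]["state"])
    return to_node

node_a = Node("node-a")
node_a.state = {"height": 1000, "role": "polygon-validator"}
take_snapshot(node_a)                # periodic backup while node-a is healthy
node_b = migrate(Node("node-b"))     # node-a went down; restore onto node-b
```

The key point is that restoring from a recent snapshot means the replacement node only re-syncs the delta since the snapshot, rather than the whole chain from genesis.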

So those are the kinds of challenges. Again, I would say creating everything in a non-cloud, generic fashion was the challenge for us especially. Apart from this, we are also researching privacy-preserving computation, with and without TEEs, trusted execution environments.

With a TEE, you’d run your node in an environment where your node’s private key sits inside the TEE, so nobody, not even the machine owner, can see that private key, and your node continuously does its thing, whether it’s a validator node or a ZK verifier node, something like that. The thing is, TEEs are not that scalable, so we are also researching approaches without TEEs.

There, we’d do something like FHE, fully homomorphic encryption, where the nodes can directly do computation on top of encrypted data without revealing what’s in that data. So those are the kinds of technical challenges we have faced at scale, and that’s what we’re researching.
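Full FHE needs heavyweight libraries, but the core idea, computing on ciphertexts without decrypting them, can be illustrated with the much simpler Paillier scheme, which is additively homomorphic only. This is a toy sketch with tiny, insecure primes, purely to show the concept, not anything NodeOps ships:

```python
import math
import random

# Toy Paillier cryptosystem: additively homomorphic, i.e. Enc(a) * Enc(b)
# decrypts to a + b -- the addition happens on encrypted data. Tiny fixed
# primes for illustration only; NOT secure, and NOT full FHE.
p, q = 61, 53
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)   # inverse of L(g^lam mod n^2) mod n

def enc(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:                # r must be coprime to n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# "Compute on encrypted data": multiply ciphertexts, decrypt to get the sum.
total = dec((enc(17) * enc(25)) % n2)          # recovers 42 without exposing 17 or 25
```

A node holding only `enc(17)` and `enc(25)` can produce the encrypted sum without ever learning the inputs; only the key holder can decrypt the result.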

Brian Thomas: Thank you, and I really appreciate you helping us understand that a little bit. I know the vision was to build open and trustless, as we like to do in the decentralized world. Having that secure node-to-node connectivity was important, I highlighted that, and then managing uptime between nodes is easier in your environment.

I appreciate that. Pratik, you’ve worked across cloud giants and bare-metal environments. What are the trade-offs between centralized and decentralized infrastructure from a DevSecOps perspective?

Pratik Balar: That’s a really interesting question, which I get a lot during meetups and such. From a DevSecOps perspective, which stands for development, security, and operations, centralized infrastructure, the cloud giants, the tier-one clouds, offers streamlined management, robust security tools, and rapid scalability, but introduces vendor lock-in, higher costs, and single points of failure.

A few weeks ago, GCP was facing an issue and a majority of one region was down because somebody pushed some bad commits. That also took down a lot of other things, like Cloudflare, which is a major threat in the centralized world. I don’t know if people noticed, but NodeOps was not down, even though we utilize GCP and others.

So that’s the beauty, I think, of DePIN. By utilizing bare-metal decentralized infrastructure, we can avoid this entirely. As NodeOps does, we utilize a multi-cloud hybrid setup for everything, which provides greater control, cost efficiency, and resilience against censorship, but it also faces challenges with consistent security, complex orchestration, and slower deployment cycles.

It requires higher maintenance, of course; you no longer have a single place to manage everything, so you have to manage all those pieces yourself. Centralized systems simplify compliance and monitoring as well, while decentralized setups require distributed trust mechanisms and, in some cases, advanced protocols, which increases operational cost.

Balancing security, scalability, and agility remains critical in choosing the optimal infrastructure for a specific use case. In our case, it’s bare metal plus a mix of other tier-one clouds. Those are the trade-offs between centralized and decentralized infrastructure from a DevSecOps perspective.

Brian Thomas: Thank you, I appreciate that. You know, I had someone else on here recently who was a big proponent of decentralized cloud, and obviously you talked about GCP. We see this every week with the major cloud providers: they’re more vulnerable to attacks, they may have a bad code push, then they’ve got to reverse that out, and it affects everybody, and it’s hard.

But I like how, when we’re talking about DevSecOps, you’re using the decentralized environment: you can streamline your management, there’s more cost efficiency, it’s secure. And then of course there’s that distributed trust environment you talked about. So I appreciate you highlighting that.

Pratik, last question of the day. DePIN 2.0 aims to be more developer-friendly. What specific steps are you taking at NodeOps to reduce the friction for developers entering the Web3 space?

Pratik Balar: Again, good question. We have a few features in NodeOps Cloud for devs, especially beginner devs. I want to highlight a few of them.

One of them would be our template-based deployment, where any dev, even a beginner, can create a YAML-based template of their app. Let’s say you want to spin up a Uniswap dApp: you just create a YAML file for it and push it to NodeOps Cloud. The YAML file is very simple.

It’s just a wrapper around a Docker Compose or Kubernetes manifest, which describes how you want to deploy your stuff. That’s what that YAML defines. Then your deployment is taken care of by the NodeOps Cloud orchestrator: it’ll schedule it, maintain it, show you uptime, show you traffic, show you how many resources it’s utilizing, and give you a link that’s fronted by all these security tools.
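To make the "wrapper around Docker Compose" idea concrete, here is a hedged sketch of what such a template expansion might look like: a tiny YAML-shaped spec (shown as a Python dict) expanded into a Docker Compose-style service definition. The field names are illustrative assumptions, not NodeOps’ actual schema:

```python
# Hypothetical user-facing template -- the kind of thing the YAML file would hold.
template = {
    "name": "my-dapp",
    "image": "ghcr.io/example/my-dapp:latest",  # placeholder image
    "port": 8080,
    "replicas": 2,
}

def expand(tpl):
    """Expand the simple template into a Compose-style service definition
    that an orchestrator could actually schedule."""
    return {
        "services": {
            tpl["name"]: {
                "image": tpl["image"],
                "ports": [f'{tpl["port"]}:{tpl["port"]}'],
                "deploy": {"replicas": tpl["replicas"]},
            }
        }
    }

spec = expand(template)
```

The user only writes the four short fields; the orchestrator owns the expansion, so scheduling, uptime, and security concerns stay out of the template entirely.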

You won’t face any kind of DDoS; we’ll take care of it. Your small dApp won’t go down even if it faces a DDoS. All of that is backed by the orchestration we’ve created around it. And we are soon releasing a replica feature on top of it, which allows you to deploy the same code to particular regions.

So let’s say you’re seeing more traffic from the USA: we’ll give you the flexibility of creating multiple services in the USA to cater to that region of the world. Things like that we are also planning. Second would be the AI Sandbox. The AI Sandbox is a really interesting feature that we have developed on NodeOps Cloud.

It’s like an in-browser code sandbox environment, with all the necessary tooling, Node.js, Rust, and all those fancy programming languages, built in. We also have GPU support, so you’ll have whatever Nvidia graphics driver you need to run your code, and the sandbox will have everything you need to get started.

So now you don’t have to configure everything locally, download this package and that package, and set up this programming language; you just spin up an AI Sandbox and everything is set up. You just start developing at that point. You can even utilize VS Code extensions, let’s say GitHub Copilot, to do your vibe coding as well.

So it checks off the vibe-coding checklist as well. On top of this, we have features like a package manager. Let’s say your code is not compatible with the latest Node.js version and you have to drop down to, let’s say, version 22 or 20: you can just run a single command and downgrade your language version as well.

We have something called the Chrome Dock feature, where you don’t have to visit your AI Sandbox every time: you just open it in Chrome, and Chrome will show you a popup saying, add it to your dock. If you’re a Mac or Windows person, just add it to your dock, and with a single click you’ll have your whole programming IDE locally.

We also have a port tunnel feature, where you can open any port from a remote machine on your local machine. Let’s say your remote machine is running some kind of AI model, say a 200-billion-parameter model, and you want to access it from local: you can just open a port tunnel, and NodeOps will create an end-to-end tunnel from your local machine to the remote instance where the whole thing is running.
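The port-tunnel idea, exposing a remote service’s port on the local machine, can be sketched with plain TCP forwarding. This is a minimal illustration, not NodeOps’ actual tunnel (which would presumably run over an authenticated, encrypted channel); the names and structure are my assumptions:

```python
import socket
import threading

# Toy TCP port tunnel: listen on a local port and forward one connection's
# bytes, in both directions, to a remote host/port.
def pipe(src, dst):
    try:
        while (chunk := src.recv(4096)):   # empty bytes = peer closed
            dst.sendall(chunk)
    except OSError:
        pass
    finally:
        dst.close()

def start_tunnel(remote_host, remote_port):
    """Open an ephemeral local listening port; forward the first connection
    to (remote_host, remote_port). Returns the local port number."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))             # port 0 -> OS picks a free port
    srv.listen()
    local_port = srv.getsockname()[1]

    def serve():
        client, _ = srv.accept()
        remote = socket.create_connection((remote_host, remote_port))
        threading.Thread(target=pipe, args=(client, remote), daemon=True).start()
        pipe(remote, client)               # remote -> client in this thread

    threading.Thread(target=serve, daemon=True).start()
    return local_port
```

With something like this, `start_tunnel("node42.example", 8000)` (a hypothetical remote) would make the model server reachable on a localhost port, so local tools can talk to it as if it were running on your machine.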

You can use that tunnel for whatever you want. Last would be RPCs and AI APIs. NodeOps has been continuously contributing toward RPCs by creating public RPC endpoints, so you’ll be able to consume public RPCs of Hyperliquid, for example.

Then there are Movement Labs and Self Chain, things like that. We also enable AI APIs. So let’s say you want to self-host your own brand-new model, or, say, your own 30-billion-parameter model: we can do it, just DM us, or you can also create it using a template on NodeOps Cloud.

Things like this are widely available for any beginner dev or anyone entering the Web3 space; they can directly consume them and create. It skips the whole setup step, you’ll have all the resources you want, and you can just kickstart your journey into the Web3 space.

Brian Thomas: Thank you, I really appreciate that. A lot of features there that you unpacked; I’ll highlight a couple anyway. Obviously you make it easy for your customers to get in there: it’s a template-based environment for large projects and organizations, which I think is amazing. You have dynamic resource allocation depending on where the needs are.

It might be geographic, like you mentioned: if they’ve got higher demand in the USA, you can easily shift that. Your AI Sandbox obviously makes it easy to stand up and then deploy; I really like that. And then you’re offering those AI APIs that customers can easily plug in and get right to work with. So I appreciate that, Pratik. It was such a pleasure having you on today, and I look forward to speaking with you real soon.

Pratik Balar: Thank you. Thank you for inviting me. It was great to be on the podcast.

Brian Thomas: Bye for now.
