Gokul NA Podcast Transcript

Gokul NA joins host Brian Thomas on The Digital Executive Podcast.

Welcome to Coruzant Technologies, home of The Digital Executive podcast.

Brian Thomas: Welcome to the Digital Executive. Today’s guest is Gokul NA. Before founding CynLr in 2019, Gokul began his career at National Instruments, where he worked as a software architect and specialist in vision, RF, and embedded systems. He was responsible for designing algorithms and serving customers across Europe, Taiwan, and India, and after a consulting stint he successfully solved 30-plus customer problems that were previously unsolved vision conundrums.

Gokul and Nikhil started CynLr in 2019 to productize their vision approach. Gokul, along with Nikhil, is building the organization to address technology and industry gaps in achieving the idea of universal factories. This requires foundationally rebuilding vision from the understandings of neuroscience, custom building imaging systems involving 400 unique parts and 200 partners from across the globe, and reinventing the industry framework to handle these operations: unique supply chains, internal tools, processes, and talent.

Well, good afternoon, Gokul. Welcome to the show!

Gokul NA: Thank you so much for hosting me, Brian. Pleasure to be here.

Brian Thomas: Absolutely, my friend. I appreciate it. You’re hailing out of Bangalore, India; currently I’m in Kansas City, so I appreciate you staying up late to do a podcast with me today. So Gokul, I’m going to jump into your first question.

You began your career at National Instruments as a software architect specializing in vision, RF, and embedded systems, designing algorithms for customers across Europe, Taiwan, and India. How did that experience shape your approach when founding CynLr in 2019?

Gokul NA: So it exposed me to a lot of the gaps that we had in the machine vision spectrum, especially when it comes to automating tasks using vision as a feedback or sensory system.

One of the primary problems that we used to notice back then is that out of, say, a hundred problems, we would only be able to solve about 30 of them: for every ten problems that we attempted, only three could we commercially succeed at with all the platforms that we had. And that seemed to be a problem across machine vision as a domain; the stalwarts like Cognex, National Instruments, all of us faced some of those issues.

So we did some soul searching at the time to see why the vision success rate, compared to the other product lines we had within our companies, was struggling so much. Each of the companies took different approaches, and they kind of pivoted their businesses to different segments.

And one of the factors that I was analyzing is: where are we failing the most, and where are we succeeding most often? It seemed that when we wanted to use vision systems or visual feedback to guide machines to do manipulation on objects, to pick, handle, or move objects, is where we were failing the most.

Even when you want to do an inspection, it is the very static cases, printed text and things that don’t need your head to move or need you to observe from a different point of view, where most of the successes were happening. So this made us ponder more on what we were missing in terms of understanding vision’s need for manipulation and what is missing in those cases.

And that shaped, later, all the tech stack that we built, and also how to make the robot a lot more self-reliant. We also noticed how a camera is very limited: it gives you only sight. Vision is more about the involvement of touch and grasping along with the visual, camera-based sensing.

So that became the basis for cybernetics, and for what we do at our cybernetics lab, CynLr, here.

Brian Thomas: Thank you. I appreciate that. And I didn’t know that the success rate for those problems was only three out of ten problems solved. Gosh, that’s certainly an uphill battle there, obviously.

But I liked how you said you dove into what was not working, not just focusing on what was working, to make this type of vision technology successful. I just love your tenacity there in diving in, so I appreciate that. And Gokul, before CynLr, you and your co-founder consulted on over 30 vision challenges that were considered unsolvable.

What’s the standout success story from that period, and how did it reinforce your decision to productize your vision approach through CynLr?

Gokul NA: I think every problem kind of stood out, because most of them had remained unsolved for a long period of time since people were approaching them from a very traditional machine vision point of view.

I’m not talking about machine learning alone. I’m talking about the whole physics of how light works, how sensors work, how a human brain goes about analyzing it, how we are able to pick out the aspects of objects and then analyze them, how your brain goes about understanding all these aspects of the object, right?

So we had situations where you had a very simple problem, like a roller bearing on top of which you have the needles and pins, and you need to be able to understand, in spite of the grease that is going to be there, how many pins are there, whether they are properly oriented, whether the surfaces are good, without having to remove the grease.

A human being seems to be much more adept at doing that. From there, all the way to an X-ray collimator set that you have to assemble for GE Healthcare with a six-axis system, each a very complicated mechanism with maybe around 15 to 20 parts that have to be put together and assembled. And all the way to one particular problem of sorting rice grains, where you need to handle more than a ton of grains that flow in freefall under gravity.

You can’t even see them; they move at around five meters per second. And yet you should be able to point out where those grains are, what the background is, whether it’s a glass piece, a stone, a good grain or a bad grain, and all that within 56 microseconds. On the other hand, you didn’t have a camera system off the shelf that could actually resolve that kind of detail at that speed.

So we had to build the whole camera from scratch, with an FPGA system. We did a wide variety of things, and underlying all of this there was an aspect of dynamic visual manipulation: in every case you are guiding motion, you are handling variations in the objects which you have not been pre-programmed or prepared for, and all those aspects are what we borrowed back later.
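
A quick back-of-the-envelope sketch of the numbers mentioned here (the 5 m/s grain speed and the 56-microsecond budget come from the conversation; the off-the-shelf frame rate is an assumed figure for comparison only):

```python
# Figures from the conversation: grains fall at ~5 m/s and must be graded
# within ~56 microseconds. The 500 fps "typical camera" is an assumed figure.
grain_speed = 5.0            # m/s, free-fall speed of the grains
decision_budget = 56e-6      # s, time allowed to detect and grade one grain

travel_in_budget_mm = grain_speed * decision_budget * 1000
print(f"Grain travels ~{travel_in_budget_mm:.2f} mm within the 56 us budget")   # ~0.28 mm

assumed_fps = 500            # assumed off-the-shelf machine-vision frame rate
travel_per_frame_mm = grain_speed / assumed_fps * 1000
print(f"At {assumed_fps} fps it travels ~{travel_per_frame_mm:.0f} mm between frames")  # ~10 mm
```

The gap between a fraction of a millimetre of motion in the decision window and centimetres of motion between ordinary frames is roughly why a custom FPGA-based camera, rather than a frame-by-frame pipeline, was needed.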

So we took the thesis that we had learned at National Instruments, left NI in 2015, and took all these 30 problems, a wide variety of manipulation issues where you are handling real-world objects at different paces, different speeds, in different formats, for different purposes, and asked: does my thesis work as a consistent one across all of these problems?

That became proof for us later to say, okay, there is a standard underlying principle which is solving all these problems, which means we could productize it. And that’s the thesis with which, in August 2019, we raised funding to productize what we had learned from our previous 30 systems.

Brian Thomas: Amazing.

I appreciate that. You know, I liked your unconventional approach to tackling these unsolvable challenges, and how you paralleled it with the question: how does a human being, how does a human brain, perform these tasks? And the fact that you had to build something from the ground up, you built that camera, right, as there was nothing in the market that you were able to utilize.

So building it from the ground up, understanding how that works, again, just amazing what you’re doing. Gokul, your CyRo robot system, powered by object intelligence, can handle unfamiliar objects in real time, much like a human infant. Where did this idea come from, and what are the key breakthroughs that made this possible?

Gokul NA: The learning for this, again, came through the other two questions that you asked before, from where our experiences were in the past. When we started analyzing what it is we are missing on the vision side when it comes to manipulation and manipulation-oriented tasks, it’s that we don’t treat motion as an independent sensing capability for a vision system.

We always think it’s video processing, we do an optical flow, and we track objects from that point of view. More than that, the way we approach computer vision is that we are hyper-reliant on color for everything. Color pattern is the way we go about it, and we think that identifying an object, then finding its location and feeding that to a robotic arm system, is enough for it to go and operate on it.

And if it fails, we either think the identification process has failed or the manipulation process has failed, and then we go and overemphasize improving the design of the gripper, or we go and overtrain our machine learning models for identification. On the other hand, take human vision, like a baby, which doesn’t need any prior understanding of the objects, right?

And it is able to instantly go act on them. If I go and give it this AirPod, it doesn’t know that it’s an AirPod. It picks it up, puts it in its mouth, throws it, picks it up with its hand, puts it in its mouth and throws it out again, right? That’s such an inherent ability. So whatever clutter you put them into, a human being is capable of isolating an object out of it, even though he doesn’t know what he’s actually picking.

So this means we don’t have to know the object prior to acting on that object. Most of the failures here are my inability to identify the orientation of the object, which means I should be orientation-agnostic to be able to go and pick it. But if I don’t know an object, how do I go and pick it?

So that’s the analysis that went into our process. And what seems to be happening is that we are very easily able to see motion in the fourth layer of our neural network, the ON-OFF ganglion cells. We use that ability to see motion to connect all the disconnected contours, because what you are filled in with is always a cacophony of colors.

Now, stitching one color to another color and creating an object out of it happens by the fact that I’m able to recognize one contour, I go and touch that contour, and when I touch and then move that contour, somehow every other color contour that was there is able to associate with it and move along with it.

Your eye’s ability to highlight only the things that move together is what helps you to stitch an object together. So that’s the ability that we bring into the camera and into object intelligence as well. And the second factor is, when you are anticipating touching them, how do I know how much force to apply?

How hard is it? Is it a soft material? And how can you heuristically go and correct your force application onto it? What does that color mean in terms of force for me in the real world? These two aspects are what we focused on very heavily; we mirrored these behaviors from the human brain and replicated some of these initial layers of our brain into it.

So that’s why we also call it visual object intelligence. And this is the basis by which the system, even if it’s a mirror-finished object whose color patterns are completely unknown, or a transparent object which just takes the background as its color feedback, can still go and act on it, pick it up.
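
As an aside, the “things that move together form one object” principle described here can be illustrated with a minimal optical-flow sketch. This is only a toy illustration of the grouping idea, not CynLr’s object-intelligence implementation; the file names and thresholds are hypothetical, and it assumes OpenCV and NumPy are available.

```python
import cv2
import numpy as np

# Two consecutive frames of a cluttered scene, before and after one object is
# nudged (hypothetical placeholder file names).
prev = cv2.imread("frame_before_nudge.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_after_nudge.png", cv2.IMREAD_GRAYSCALE)

# Dense optical flow: a per-pixel (dx, dy) motion field between the two frames.
flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)

# "Common fate": pixels that moved, and moved in roughly the same direction,
# are grouped together regardless of their color or texture.
magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
moving = magnitude > 1.0                                  # motion threshold in pixels (assumed)
dominant = np.median(angle[moving]) if moving.any() else 0.0
coherent = moving & (np.abs(angle - dominant) < 0.5)      # ~30 deg tolerance, ignoring wrap-around

# Connected pixels that moved together become one candidate object mask,
# stitching otherwise disconnected color contours into a single segment.
mask = coherent.astype(np.uint8) * 255
num_labels, labels = cv2.connectedComponents(mask)
print(f"Found {num_labels - 1} candidate object segment(s) from common motion")
```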

Brian Thomas: Wow. A lot has gone into this. Obviously you’ve not wasted a minute of your day in doing the research, and I just love this learning process. When you started, you asked what is missing in this vision technology, and of course you’re able to use motion, color patterns, contours, et cetera.

There’s a lot of things in there. You know, you talked about force application, what type of pressure would be needed. But at the end of the day, you’re mirroring this from a human brain, and I think that’s amazing, how we’re creating robotics at this level, so I appreciate that. Gokul, the last question of the day: at the AI for Good Summit 2025, you presented how decentralized micro factories can enable sustainable and personalized production.

How will CynLr’s approach reduce waste and make advanced manufacturing more accessible?

Gokul NA: So one of the primary factors that we have in the industry today is the rigidity that automation enforces on the infrastructure, which also limits the customers’ ability to react very quickly to their consumer and customer needs. When product designs have to change, they become a slave of manufacturability, on one side.

And on the other hand, there is the other issue: we always keep mining the earth for new materials. For example, converting your repair centers into factories is not possible today, and we are thinking of giga factories rather than thinking of universal micro factories.

If I’m able to make all my products customizable, a lot of wastage actually goes away right there, right? That’s basically the concept of customized consumables or customized products. Why is that so hard today? Because you need an infrastructure that transforms itself from one model to another without any additional infrastructure needs. Second, why do we always look to mine the earth for new materials rather than trying to mine them out of the dustbin?

That’s because the effort to extract material out of your wastage is higher than the effort to extract it out of the earth, in terms of volume; and more than the effort itself, the cost of that effort is the bigger problem. When you are able to build a system which is capable of applying that nuanced skill as well as a human being can, and it’s one generic, universal form factor that can handle a wide variety of items, then you have the effort freed up to extract material from your wastage.

So that allows you to transform this whole concept from looking to the earth for raw materials to looking to the dustbin for raw materials. This also makes another transformation for the customer. “What McDonald’s has learned from Ford, Ford has failed to learn back from McDonald’s” is a typical statement I use to open these concepts: the ability to franchise your manufacturing is missing.

Because of that, if you’re going into a new market and you want to set up a car manufacturing plant, you’re spending anywhere between $300 million and a billion dollars to set up a line; I’m just throwing out an approximate number. If you’re not able to sell 30,000 units produced out of it every month, then you are not profitable on that line.

On the other hand, the market might say there are four models they are interested in, and you need those four models to reach the 30,000 cars. But for every model, you are now forced to build new lines. What if I could produce all cars on the same line, all platforms on the same line, a sedan and an SUV on the same line?

Then you don’t have to go for hyper production, maximizing your production, creating a sales model and artificially creating market demand for it, right? That also saves a lot of material waste, and also the amount of investment that you put into the factory today. So what is the bottleneck there?

It’s an infrastructure that can morph itself to be able to act between two entirely different objects or products. So if I have a platform and a robot system that’s able to switch over between two different tasks back to back, without the issue that human beings typically have with context switching, and produce the consistency of a machine, then we have a phenomenal transformation coming out of it, what we call universal micro factories.
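
To put rough numbers on that argument, here is a small illustrative calculation using the approximate figures from the conversation plus an assumed split of demand across four models, comparing four dedicated lines with one line that can switch between models:

```python
# Approximate figures from the conversation; the per-model demand split is assumed.
line_cost = 300_000_000            # USD to set up one car manufacturing line
breakeven_units = 30_000           # monthly volume a line needs to be profitable
demand_per_model = [12_000, 9_000, 6_000, 3_000]   # assumed monthly demand per model

total_demand = sum(demand_per_model)                # 30,000 units/month across the portfolio

# Dedicated lines: one line per model, but no single model clears break-even.
dedicated_capex = line_cost * len(demand_per_model)
profitable_lines = sum(d >= breakeven_units for d in demand_per_model)

# One universal line: a single capex serves the whole portfolio, which does clear break-even.
universal_capex = line_cost
portfolio_ok = total_demand >= breakeven_units

print(f"Dedicated lines: ${dedicated_capex:,} capex, {profitable_lines} of 4 lines profitable")
print(f"Universal line:  ${universal_capex:,} capex, portfolio breaks even: {portfolio_ok}")
```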

Brian Thomas: That’s amazing. I’ve never really heard that concept, but you did delve into quite a bit here. Just to highlight a couple of things: we know that the market, the customer needs, change in a heartbeat, and how do we adapt so quickly? Manufacturing is not easily able to adapt, as you know and explained there.

I liked what you highlighted, that the ability to franchise your manufacturing is missing, and you’re looking to solve these things. You dove into a little bit of that: extracting raw materials versus material waste, recycling, other alternatives for manufacturing. And the last thing, of course, having robotics able to multitask, which humans can’t do very well.

But if we can do that, then again, as you said,

Gokul NA: machines can really do well. Yeah.

Brian Thomas: Yep. Yep. So I appreciate that, I really do. And Gokul, it was such a pleasure having you on today, and I look forward to speaking with you real soon.

Gokul NA: Thank you so much, Brian. Thanks for giving me a platform to express these thoughts and ideas that we have created at CynLr.

Much appreciate the opportunity.

Brian Thomas: Bye for now.
