Pramin Pradeep Podcast Transcript
Pramin Pradeep joins host Brian Thomas on The Digital Executive Podcast.
Brian Thomas: Welcome to Coruzant Technologies, Home of The Digital Executive Podcast.
Do you work in emerging tech, or are you working on something innovative? Maybe you're an entrepreneur? Apply to be a guest at www.coruzant.com/brand.
Welcome to The Digital Executive. Today’s guest is Pramin Pradeep. Pramin Pradeep is the Co-Founder and CEO of BotGauge AI, a US-based autonomous QA-as-a-Solution company redefining how modern software teams ensure quality at engineering speed.
Using a QA-as-a-Solution model, BotGauge redefines quality assurance for fast-growing engineering teams. It combines AI-native testing agents with forward-deployed QA pods to continually create, run, and maintain end-to-end tests with owned quality outcomes. With over a decade of deep experience in low-code ecosystems and enterprise QA transformation, Pramin has built his career at the intersection of automation and scalable software infrastructure.
He previously helped scale a high-growth startup from inception to $3 million in revenue, contributing to its acquisition by Sauce Labs. Pramin has worked with leading enterprises, including Adobe, Infosys, and Uncork, to streamline software testing and quality operations.
Well, good afternoon, Pramin. Welcome to the show.
Pramin Pradeep: Yeah, thanks Brian. Thanks for inviting me.
Brian Thomas: Absolutely, my friend. I appreciate it. And I know you’re generally based out of the Bay Area, San Francisco, California. I’m in Kansas City, today you’re traveling. I understand. So, we’ll just jump into it here. Pramin, let me ask your first question here.
You describe BotGauge as an autonomous QA-as-a-Solution company. What’s fundamentally broken in traditional quality assurance models that requires rethinking quality as a service rather than a function?
Pramin Pradeep: Yeah, Brian. So, to understand that, we have to understand the history of quality engineering, or quality assurance, and its speed, right?
So here, if you go back ten years, for every application that came into production, the release cycle, that is, every change, was happening once in six months, or once in three months. However, as development progressed, customer expectations started increasing. They wanted more and more functionality.
They wanted to get the latest updates, and the use of SaaS, everything started coming into the picture. Because of that, the competition increased and the release schedule started accelerating for all the SaaS companies. So from once in six months, it started reducing to once in two weeks, then once a week, and then once an hour.
So because of that, automation became more prominent. Ten years ago, if you asked anyone, they’d say that only a few companies were doing automation, maybe through Playwright scripts, or maybe through Selenium and other open-source scripts. However, right now the need is for QA to cope with the shorter release cycles with prominent automation in place. For that, only AI can be brought in, and it should be integrated into any SaaS ecosystem in the most rigorous way for that to happen.
Brian Thomas: Thank you. I appreciate that. And you’re absolutely right. I was a developer in the early days, and looking back, QA was pretty strict, right? We had some really structured release cycles, and as you talked about, release cycles were not very frequent. But as the demand for more enhancements and more functionality came about, it was hard for QA to keep up.
And I totally get that. But what you’ve done is really brought an autonomous level of quality at a faster pace, a faster scale, a faster turnaround. So I appreciate that. And Pramin, BotGauge combines AI-native testing agents with forward-deployed QA pods. How does that hybrid model improve speed, reliability, and ownership compared to purely automated or purely manual testing approaches?
Pramin Pradeep: Yeah, so I have been in this space for more than 10 years now, seeing the journey from open-source automation to low-code automation using NLP, through to the AI world, right? Right now, what is happening is that just handing over agents, or AI agents, to a company is not gonna work, because every SaaS product or every piece of software is different and goes through a lot of customization.
So, the learning process is very important for any agents that get integrated into their ecosystem. So for that, what we have done differently is this: we enhance our agents and deploy them into the customer’s ecosystem. However, a forward-deployed engineer has to monitor and analyze the agent to see whether it is learning in the right format and whether it’s crossing the boundary conditions or not.
So, all those constraints have to be kept in mind. That’s why we not only deploy the agents; there’s a forward-deployed engineer who monitors the end-to-end operation of the agents and also provides input if they deviate from the path. So that’s why it is very important to have both the agents and a human in the loop to make sure the customer is able to get the right output.
So yeah, the output is nothing but increased coverage in a shorter period of time. Just to put numbers on it: if they just go with an open-source or low-code tool available in the market, the customer is going to take around four to five months to reach 80% coverage.
However, with the agents which we have built and the human in the loop, we’ll be able to do it in two weeks. So that’s the kind of onboarding of test cases and the coverage we’ll be able to implement in any ecosystem compared to the traditional methodology that’s in place. So it’s increased efficiency and increased coverage, which will support their release cycles, and they can reduce their release cycle from two weeks to two days with the kind of deployment we do in their infrastructure.
Brian Thomas: Thank you. And you’re right, you highlighted that hybrid model: efficiency, quality, faster turnaround times.
That learning process for AI agents does take some time, and you wanna make sure that it’s accurate. And I like how you use this hybrid model of having that human in the loop. You mentioned that forward-deployed engineer who actually monitors it and makes sure that it stays within its parameters.
So, I appreciate the insights. And Pramin, many engineering teams prioritize shipping features quickly. Why do you believe the next decade of innovation will be defined by autonomous quality infrastructure rather than faster coding alone?
Pramin Pradeep: Yeah. Uh, so that’s a very interesting question, Brian, because right now, if you look at it from a development standpoint, there are multiple companies addressing that problem statement.
Some call it vibe coding, an increased level of coding enhancement through prompts, everything that’s happening. However, even once the speed of coding is handled by the ecosystem, teams still aren’t able to release to production because QA becomes the bottleneck. ’Cause customers, as you rightly know, don’t accept bugs or any broken flow.
Once that happens, they’ll just shift to the competition. So that becomes the most important point. And at the end of the day, it’s not about writing the code. It has to get released into prod and customers should start using it. For that, end-to-end regression has to be done. That’s why it’s important to have an autonomous QA framework integrated into any infrastructure to support these release cycles.
So, what I want to tell the dev community is: yes, writing the code is very important. However, the most important part is shipping to the customer to get the feedback loop established. For that, you have to tightly integrate autonomous QA into the frame as well.
Brian Thomas: Thank you. Appreciate that. You did highlight some things that are happening at organizations of all different sizes, but with the speed of AI agents and this low-code, no-code development, QA is definitely the bottleneck right now, and people can’t wait for things. It’s just kind of how we are in our human nature. But you highlighted the fact that developing and integrating those autonomous agents into the QA process will help speed this along. And obviously we want to have quality output, of course.
So, I appreciate that. And Pramin the last question of the day. As BotGauge scales following its recent funding round, what will separate autonomous QA platforms that truly deliver outcomes from those that simply layer AI on top of legacy workflows?
Pramin Pradeep: Yeah, so, it’s like this: you cannot modify the existing infrastructure and try to make it AI by just adding a layer to it, right?
You need to build everything from scratch, especially if you’re going for an AI-first company. For example, take the traditional players whose code is already written. Let me touch upon one of the major pain points in automation: self-healing maintenance. When an element moves around or changes, or the flow changes, right now the traditional frameworks cannot cope with it.
For these things to be autonomous by nature, the initial algorithm has to be written for AI. That’s why it’s very important for any AI-first company to build from scratch rather than adding a layer on top of the existing infra. That’s what BotGauge is: an AI-born company that started in the AI and LLM era, where all the code, not only the LLM infrastructure layer but also the algorithms supporting it, has been written from scratch to enhance the agent with a learning-first approach: how it can refine to the most extreme level of learning the end-to-end application in a shorter period of time.
Brian Thomas: Thank you. I really appreciate that.
And you did talk a little bit about that, especially with your company, but for AI companies and AI platforms in general, it is best to build everything from scratch if you’re building that type of infrastructure. As you know, there are problems with adding multiple layers, or adding an AI layer just on top of legacy workflows.
Obviously that is going to add more complexity and more problems down the road. So again, I appreciate you teasing that apart for us and Pramin, it was such a pleasure having you on today and I look forward to speaking with you real soon.
Pramin Pradeep: Yeah, thanks. Thanks for inviting me to this podcast, Brian. I appreciate it. Yeah, we’ll catch up in person after.
Brian Thomas: Bye for now.
Listen to the audio on the guest’s Podcast Page.