Shashank Kapadia Podcast Transcript
Shashank Kapadia joins host Brian Thomas on The Digital Executive Podcast.
Brian Thomas: Welcome to Coruzant Technologies, home of the Digital Executive Podcast.
Welcome to The Digital Executive. Today’s guest is Shashank Kapadia. Shashank Kapadia is a recognized leader in machine learning, specializing in large-scale AI solutions that don’t just drive business impact, they redefine how enterprises harness intelligence. With over a decade of experience at global industry leaders, he has pioneered cutting-edge machine learning innovations across personalization, search, and retail, optimizing revenue, enhancing engagement, and transforming decision making at scale.
Beyond technical execution, Shashank is a vocal advocate for ethical AI, fairness, and transparency, ensuring ML systems serve both business goals and societal needs. His thought leadership extends beyond the workplace: he’s a sought-after speaker at global AI conferences, a published researcher in NLP, and an expert judge for top-tier industry awards and hackathons.
His insights on the realities of machine learning, from the myths of real-time AI to the challenges of model deployment, resonate widely across the AI community. A valedictorian in operations research from Northeastern University, Shashank blends deep technical expertise with high-level strategic thinking, making him the ideal guest for conversations on AI at scale, ML in production, ethical AI, and the future of machine learning systems. Whether debunking industry hype or sharing hard-won lessons from the field, he delivers candid, actionable insights that leave audiences informed and inspired.
Well, good afternoon, Shashank. Welcome to the show!
Shashank Kapadia: Thank you so much, Brian. Thank you for having me. It’s a pleasure to be here. We’re really excited to dive into today’s discussion.
Brian Thomas: Absolutely. Thank you, my friend. And hailing out of that Silicon Valley area, I know you’re near San Jose, San Francisco area, and I appreciate that.
I have a lot of guests out of there. So again, Shashank, I’m going to jump right into your first question. How have your machine learning solutions redefined the way enterprises harness intelligence, particularly in organizations like Walmart or Randstad?
Shashank Kapadia: Yeah, that’s quite a loaded question, Brian. I suppose one of the most transformative projects that I’ve led so far involved essentially reimagining how a global staffing company approached search and recommendation systems.
We didn’t just build a machine learning model, we redefined how intelligence could be harnessed at scale. By leveraging years of global data, we were able to move from traditional keyword-based search and matching to deeply personalized experiences powered by deep learning and machine learning algorithms.
The key breakthrough was in understanding context. We built neural networks capable of deciphering subtle patterns in the behavior of thousands, if not millions, of candidates and employers globally. And this wasn’t just about improving result relevancy. It was also about creating a personalized system that felt both intuitive and almost anticipatory to its users.
So for example, our models could infer a candidate’s career trajectory or an employer’s hiring needs based on nuanced signals collected from years of data. And the result: a platform that didn’t just retrieve information, it delivered insights that felt tailor-made for that specific candidate or that specific recruiter.
Of course, working with such vast and diverse datasets comes with its own challenges. We had to design robust data pipelines, optimize model performance, and ensure the system could deliver at a global scale. But the real lesson here is, again, that machine learning isn’t just about solving today’s problem.
It’s about laying the groundwork for future innovation and really harnessing the data at our disposal to make consumers’ lives better.
Brian Thomas: Thank you for sharing. And I can tell you’re really passionate about what you do, Shashank. Machine learning is something in itself.
It’s a monumental feat to build something, especially to filter thousands, like you said, tens of thousands, maybe millions of different candidates through the system, with the outcome being providing the best customer service around that whole process. As you know, we want to not alienate new employees; we want to bring on new employees.
So doing that in the talent space is very challenging.
I understand. So I appreciate your insights. Shashank, you’ve spoken about the myths surrounding real time AI. What are some of the common misconceptions and how can businesses navigate these challenges?
Shashank Kapadia: Yeah, I’ll be honest with you, Brian. I think the term real-time AI sounds like it’s unlocking a superpower, but in my many years of experience engineering these systems, I’ve learned that it’s more myth than magic.
Here’s what I’ve seen and what I’ve learned. First things first, speed isn’t everything. Too many people equate real time with instantaneous speed, but if your data is messy or your model isn’t well calibrated, faster predictions just mean faster mistakes. The second is continuous-update overload. The idea of having models that update on the fly is very appealing from a stakeholder’s perspective, until you face the engineering challenges.
Constant data ingestion, on-the-fly model adjustments, and skyrocketing compute costs can quickly turn an innovation or a POC into a maintenance nightmare. I’ve been there, I’ve debugged pipelines at odd hours, and I’ve realized that a more pragmatic approach often delivers better results in the long term than trying to chase speed.
Chasing continuous updates also leads to overengineering the pipeline. Instead, I’ve found that grouping things into micro-batches, even as short as 5 to 10 seconds, can capture most of the benefits of real-time processing without the chaos that comes with maintaining those systems.
This strategy basically involves precomputing a lot of what your model needs to make an inference, caching those values, and then using event-driven rules to handle urgent cases efficiently. I should add that there are definitely use cases where you absolutely need real-time capabilities.
Think about fraud detection, high-frequency trading, or autonomous vehicles. However, in most scenarios, think of it as 90/10: 90 percent of your use cases are perfectly served by near-real-time approaches, while only a few scenarios, at least in today’s age, truly demand a full-fledged real-time system.
In short, what I advocate is fast, not furious, machine learning. By decoupling these components and embracing a simpler, robust architecture, you get much more reliable performance without constant technical firefighting. So real-time ML is no silver bullet. It’s about creating a system that can deliver insights at the right pace every single time.
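The micro-batch strategy described above (buffer events briefly, precompute and cache features, and let event-driven rules bypass the batch for urgent cases) can be sketched in a few lines of Python. This is a hypothetical illustration, not the production system discussed in the interview; the event types, the feature store contents, and the 5-second window are all assumed for the example.

```python
import time
from collections import deque

# Hypothetical feature store: values precomputed offline so that
# inference time is just a cheap lookup.
PRECOMPUTED_FEATURES = {
    "user_1": {"score": 0.82},
    "user_2": {"score": 0.35},
}

def is_urgent(event):
    """Event-driven rule for the rare cases that cannot wait for the next batch."""
    return event.get("type") == "fraud_alert"

class MicroBatcher:
    """Buffers events and flushes them every `window_seconds` (e.g. 5-10s)."""

    def __init__(self, window_seconds=5.0):
        self.window_seconds = window_seconds
        self.buffer = deque()
        self.last_flush = time.monotonic()
        self.processed = []  # stand-in for downstream consumers

    def handle(self, event):
        if is_urgent(event):
            # Urgent path: bypass the batch entirely.
            self.processed.append(("urgent", event))
            return
        self.buffer.append(event)
        if time.monotonic() - self.last_flush >= self.window_seconds:
            self.flush()

    def flush(self):
        # One scoring pass per batch instead of one per event.
        batch = list(self.buffer)
        self.buffer.clear()
        self.last_flush = time.monotonic()
        for event in batch:
            features = PRECOMPUTED_FEATURES.get(event["user"], {})
            self.processed.append(("batched", {**event, **features}))

batcher = MicroBatcher(window_seconds=5.0)
batcher.handle({"user": "user_1", "type": "click"})       # buffered
batcher.handle({"user": "user_2", "type": "fraud_alert"})  # handled immediately
batcher.flush()  # force the periodic flush for the demo
```

The design point is the decoupling: the expensive work (feature computation) happens ahead of time, the batch loop amortizes model calls, and only a thin rule layer runs per event.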
Brian Thomas: Thank you. You have a lot of insights, knowledge, and experience to share there, and I appreciate that. I had someone in this space tell me that once you give AI a command, it can’t be reactive, at least not as well. And I can see how this could become very cumbersome and a customer service nightmare.
It’s almost like writing custom code every day for an application that everybody in the organization is already using. So I can only imagine what that’s like. Thank you for the insights. Could you share insights from your published research in natural language processing and its practical applications in industry?
Shashank Kapadia: Yeah, absolutely, Brian. I think NLP is one of the most exciting frontiers in AI. At least it has been for many decades, but it has risen to prominence in the last 10 years. What I have found, both from an academic standpoint and from practicing in the industry, is that its true potential lies in the ability to go beyond basic language understanding and truly resonate with the specific domains we apply the technology in.
So in my research and applied work, I’ve focused on taking state-of-the-art NLP models and transforming them into domain-specific powerhouses. It’s not about deploying a one-size-fits-all solution. It’s about creating systems that can understand the unique voice, context, and priorities of your own business and the solution you’re deploying them for.
One of the projects that comes to mind involved adapting an NLP model for a highly specialized industry domain. The challenge wasn’t just about training the model on a significantly large dataset; it was about fine-tuning it with proprietary data that captured the nuances of the domain.
So, for example, when we use NLP models for a specific application in a specific industry, there are industry-specific terminologies, workflows, and sometimes even cultural contexts that you want your model to understand, and that can only happen as you fine-tune these models on the data that you have collected over a number of years. This level of customization can lead to a significant boost in accuracy and, more importantly, can unlock insights that directly influence not just business decisions, but also the overall customer experience. So in my experience, NLP isn’t just about understanding words.
It’s about understanding your world, whether it’s optimizing customer interactions, extracting insights from unstructured data, or automating complex workflows. The key here is to tailor the technology to your specific needs, and that’s where the magic happens.
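A toy example of why domain adaptation matters: a generic model can miss equivalences that are obvious inside an industry. In the sketch below, a hand-built synonym map stands in for the knowledge a model would actually learn by fine-tuning on proprietary data; the terms "RN" and "SWE" and the matching logic are hypothetical simplifications, not the system described above.

```python
# In staffing, a query for "RN" should match postings for "registered nurse".
# A generic vocabulary misses this; domain knowledge closes the gap.

GENERIC_VOCAB = {"nurse", "engineer", "manager"}

# Stand-in for knowledge learned from (hypothetical) proprietary domain data.
DOMAIN_SYNONYMS = {
    "rn": "nurse",
    "swe": "engineer",
}

def normalize(token, domain_aware=True):
    """Lowercase a token and, optionally, map domain jargon to a canonical term."""
    token = token.lower()
    if domain_aware:
        token = DOMAIN_SYNONYMS.get(token, token)
    return token

def match_score(query_tokens, job_tokens, domain_aware=True):
    """Fraction of query tokens found in the job description after normalization."""
    query = {normalize(t, domain_aware) for t in query_tokens}
    job = {normalize(t, domain_aware) for t in job_tokens}
    return len(query & job) / len(query)

# A recruiter searches "RN"; the posting says "nurse".
print(match_score(["RN"], ["nurse"], domain_aware=False))  # 0.0 — generic miss
print(match_score(["RN"], ["nurse"], domain_aware=True))   # 1.0 — domain-adapted match
```

Real fine-tuning replaces the synonym table with learned representations, but the payoff is the same: the system understands the domain’s language, not just language in general.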
Brian Thomas: Thank you. I appreciate that. NLP is amazing.
So powerful. So much you can do with it. And like you said, though, there are some nuances and every place in the world has a different business need. Maybe you’re in a different country, different language, different culture. There’s just so many ways that it can be customized to approach a particular business idea, solution, or just the customers in general in that particular part of the world.
So I appreciate that. There’s a lot there to unpack for sure. We could spend hours on it. And Shashank, the last question of the day, what emerging trends in AI and machine learning do you foresee having the most significant impact on industries in the next few years?
Shashank Kapadia: Excellent question, Brian. I wish I had a crystal ball to predict where things are going.
Well, we can always look at some underlying patterns and trends and see, at least for the next few years, where things might be headed. Specifically, when we discuss the AI landscape, it’s evolving at an incredible pace today, and there are several emerging trends that, in my opinion, are poised to reshape how we build, deploy, and think about machine learning systems going forward.
So here’s what I believe will have the most significant impact in the coming years. One of them is the democratization of foundational models. When the first foundational models came live, there were only one or two models out there in the world, and they were proprietary AI systems.
But what we have witnessed since then is a seismic shift from exclusive proprietary AI systems to more accessible and adaptable foundational models. And this is a game changer. Imagine having access to a robust pre-trained model that you can then fine-tune with your own data. This is no longer a luxury reserved for tech giants, as it was a couple of years back.
I have seen firsthand how this approach can essentially level the playing field, enabling organizations of all sizes to innovate and compete. The second underlying trend I’m looking at is pragmatic integration over real-time hype. We briefly covered this earlier in our discussion, but as I’ve often argued, real-time AI is oftentimes overhyped.
The future isn’t just about chasing speed for its own sake. It’s about smart integration. As we look into integrating AI solutions into a variety of different workflows, we can really start to see how beneficial it can be for both consumers and businesses. The third area where I’ve seen an emerging trend is federated and edge learning.
So the rise of on-device inference, as well as training, is transforming how we think about data and computation. By bringing AI closer to where the data is generated, which is our cell phones or our laptops, for example, we are not only reducing latency, we are also improving the privacy of data, and we are inching toward a more democratized, distributed compute environment where some of this real-time decision making can be done.
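One common realization of this idea is federated averaging: each device trains on its own private data and shares only model weights, which a server then averages, so raw data never leaves the device. The linear model, learning rate, and two-device setup below are assumptions chosen to keep the sketch self-contained; production systems use neural networks and many more clients.

```python
# Minimal federated-averaging sketch (hypothetical, pure Python).

def local_update(weights, data, lr=0.1):
    """One pass of gradient descent for a linear model y = w*x
    on a single device's private data."""
    w = weights
    for x, y in data:
        grad = 2 * (w * x - y) * x  # d/dw of the squared error (w*x - y)^2
        w -= lr * grad
    return w

def federated_average(local_weights):
    """Server step: average the weights reported by each device.
    Only weights cross the network, never the underlying data."""
    return sum(local_weights) / len(local_weights)

# Two devices, each holding private data drawn from the true model y = 2x.
device_data = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0)],
]

w = 0.0  # shared global model
for _ in range(50):  # communication rounds
    w = federated_average([local_update(w, data) for data in device_data])

print(round(w, 2))  # converges toward the true slope, 2.0
```

The privacy property comes from the protocol shape: the server only ever sees `local_update` outputs, which is what makes this attractive for decentralized, responsible scaling.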
This trend is particularly critical as we scale AI in a more responsible way, ensuring that data remains secure and decentralized. The fourth trend I’m seeing is around ethics, transparency, and vertical integration. Again, as AI becomes more and more pervasive, the need for ethical, transparent systems has never been greater.
The next generation of AI models will prioritize governance, explainability, and fairness alongside raw accuracy and raw performance. In my opinion, this isn’t just a regulatory requirement. It’s going to be a competitive advantage for organizations that can build trust. Transparent AI will lead the way.
And last but not least, the plateau of data and the proprietary advantage we have seen in recent years in many domains. Today, the exponential growth of the data we use to train these models is plateauing. The new frontier isn’t just about who can build a new foundational model. It’s about leveraging the models you already have in smarter ways.
Proprietary data, when it’s governed and engineered effectively, will become a strategic asset for any organization. And so the organizations that can turn their data into actionable insights through scalable, well-designed ML solutions will have a lasting edge.
Brian Thomas: Thank you. I really do appreciate that. And we talk about the crystal ball here on the podcast, where we’re looking ahead at emerging trends.
And sometimes we have a good idea, but what you shared this evening was really important, and there are two things I took away from it, Shashank. One is obviously ethics, which is so near and dear to my heart around AI; as the development of AI advances, ethics is key, and I think really key to the survivability of the human race.
Some may say that might be a little extreme, but I think it’s so important. The other one I liked is the fact that you can now compete with big tech, and you will have a personal assistant, AI that will learn your personal traits, your daily schedule, and that sort of thing, which is, I think, amazing.
And I’m glad you highlighted those tonight. And Shashank, it was such a pleasure having you on today, and I look forward to speaking with you real soon.
Shashank Kapadia: Yeah, likewise, Brian. It was wonderful having this discussion with you, and I hope the listeners will find these insights to be useful in their day to day life and strategies going forward.
Brian Thomas: Bye for now.
Disclaimer:
Shashank Kapadia’s comments and opinions are provided in their personal capacity and not as a representative of Walmart. They do not reflect the views of Walmart and are not endorsed by Walmart.