Andrea Iorio Podcast Transcript
Andrea Iorio joins host Brian Thomas on The Digital Executive Podcast.
Brian Thomas: Welcome to the Coruzant Technologies, home of The Digital Executive Podcast.
Welcome to The Digital Executive. Today's guest is Andrea Iorio. Andrea Iorio is one of the most requested keynote speakers globally on AI, digital transformation, leadership, and customer centricity. He shares his thoughts and ideas at the intersection of business, technology, philosophy, and neuroscience in his more than 100 keynotes per year for many Fortune 500 companies such as Abbott, Bayer, Cargill, Dow, IBM, Roche, Syngenta, Tetra Pak, and many more. He's a columnist at MIT Technology Review Brazil and the official host of NVIDIA's podcast in Brazil, has more than 100,000 followers on social media, and has been ranked among the 15 top global AI influencers on LinkedIn by Taplio.
Well, good afternoon, Andrea. Welcome to the show!
Andrea Iorio: Thank you so much, Brian. Such a pleasure to be here.
Brian Thomas: Absolutely, my friend. I appreciate it. I know you traverse the globe, sometimes virtually, sometimes physically, and I appreciate that you're calling out of Miami today. I'm in Kansas City. Let's have a great conversation.
So Andrea, I'm gonna jump into your first question. Your book Between You and AI, published by Wiley, lays out a new framework for leadership. What are the most misunderstood or overlooked skills in that framework, and how do you advise leaders to begin shifting toward that kind of leadership today?
Andrea Iorio: Yeah, Brian, I mean, one of the biggest misconceptions nowadays, I think, is that all leaders should become technical experts, understand how AI works in detail, and master the technical aspect of it.
Although I think it's important to understand the technology and how it's impacting the way we work, I think it's even more important to understand how we should reshape our human skills in the face of AI. And the book is exactly about that. The reality is that whenever we look at AI's ability to substitute for or perform certain tasks, it is much better when those tasks fall under the domain of the hard skills, the skills we can acquire through studying, mastering, and practicing. But AI is not good with the soft skills, and that's exactly where the human edge is. So some of the overlooked skills include what I call in the book "data sensemaking": if AI is better at pattern recognition, we humans must strengthen our intuition and be able to think critically about the output that AI spits out.
I also talk about "reperception," the ability to see problems from different perspectives, as well as empathy, agency, and a number of other skills that I sum up across three big pillars of leadership change: the cognitive, the behavioral, and the emotional. So as practical advice, I think leaders should start by improving their questions, not just their answers, and rethink their roles in their organizations and in their day-to-day life.
Brian Thomas: Thank you, I appreciate that. And I know there's a lot we could unpack even further here, but at the end of the day, the world is centered around humans, human connection, and human behaviors. AI is really gonna throw a wrench into that in some ways, but we can obviously tease those apart, like you said: leaders should understand the AI technology, but as humans, we should focus on those soft human skills. And I liked how you highlighted strengthening your intuition, your "reperception," as you called it. There's so much more we could discuss here, but moving on to your next question, Andrea: having led digital operations as Head of Tinder Latin America and as Chief Digital Officer at L'Oréal Brazil, what are some lessons you carry forward from those roles, especially when balancing scale, experimentation, and customer experience in your current work?
Andrea Iorio: For sure, Brian. I mean, on paper it's striking how different companies like Tinder and L'Oréal are: Tinder is a digitally native, much more recent company that changed the way people relate to each other, especially single people, while L'Oréal is a behemoth in the beauty sector with more than 110 years in the market.
Of course, it's easy to see the differences, right? At Tinder, scale came very fast because of the digital tools we used. L'Oréal, of course, had a lot of legacy, and we embedded digital tools through experimentation and change management. But I think the commonality between the two brings me to the main lesson I got from these two experiences: customer centricity is more important than ever, and it's a common denominator across industries.
And it's more important than ever in the age of AI because, getting back to the topic of the book and the most talked-about technology today, the interesting thing about AI is that it greatly empowers the customer of any company in any industry: through infinite access to information, lower switching costs, more competition in the market because new competitors can pop up anytime, lower barriers to entry, and especially because people are now content creators. With this, it's sort of like a toolkit that gives the customer of any company much more leverage in the relationship with companies, be they startups like Tinder or behemoths and traditional companies like L'Oréal.
So I think the commonality, and the lesson I've learned, is that we need to understand that the customer is more and more empowered today, and that leaders must be able, again, to blend the technology with the human insight needed to deliver better customer experiences. Because, last but not least, there is a study by the CEB Leadership Council with Google showing that the emotional aspect of a transaction carries double the weight of the economic outcome in determining whether the transaction is successful.
So it's not about the cost or the price the customer pays anymore; it's about the emotional involvement that he or she has across the customer journey. And I think I've learned that through very different experiences, but with this commonality, at Tinder and L'Oréal.
Brian Thomas: Thank you. And you really did tease apart those two examples, right?
There are major differences between digital companies like Tinder and legacy giants and traditional brick-and-mortar brands. But the thing that really stuck out for me is that customer centricity is more important than ever, which I totally agree with, and leveraging AI to blend that technology and augment the customer experience, to enhance that emotional involvement every step of the way. Like you said, people prefer experiences that are catered to their needs and sensitive to them as customers, so I appreciate that. And Andrea, you've asked: what's the point of implementing the most advanced AI if people aren't engaged and ready to use it? How do you help organizations strike a balance between deploying powerful AI applications and building the human readiness, the skills, mindset, and governance, to use them well?
Andrea Iorio: Yeah, sure. Recent data from the MIT Media Lab shows, Brian, that most AI pilot projects fail, actually 95% of them, and this points to the fact that something is wrong with AI implementation nowadays.
My thesis is that it's actually the human part of it. Oftentimes we roll out very advanced technologies, but we don't have the people ready to actually use them, and with AI this happens at an exponential scale. If we look at readiness across organizations, it is lacking across three main pillars.
The first is definitely the skillset, which is the topic of the book: we need to train more people, and train them better, on how to work with AI. The second is the mindset, because there's a huge problem here: lots of people see AI as a replacement, as a competitor, as something coming for their job. The truth is that it's coming for some of our tasks, right?
But our jobs need to be reshaped by AI as a copilot, not a replacement tool. So we need to see AI as augmentation, not just as automation. Automation is merely the substitution of human tasks; augmentation is the enhancement of the quality of human tasks thanks to AI. And I think that's exactly the point we want to get to when it comes to the mindset: seeing AI as an opportunity and not as a threat.
And the third is governance. There's a whole lot of issues related to accountability, the transparency of AI tools, ethics, and even overdependency, which can grow the more our people use AI tools. Studies show that brain engagement decreases, and that's a problem, because if we have people within the organization who just copy and paste what AI says, well, we'll have a problem: that output can be biased, can be the result of hallucination, can suffer from generalization problems, and can lack transparency, because no one really understands how the AI made that decision, not even the developers, right?
So I think that leaders in organizations must balance these powerful tools with human trust and engagement, and that comes from training. Whenever we look at budgets within organizations, the vast majority goes toward the tool, toward the technology, but not nearly as much investment goes to people.
And I think that should be rebalanced.
Brian Thomas: Great, thank you so much. I'd like to highlight just a couple of things. That recent MIT study says most AI projects are failing, and I believe you mentioned 95%, which is really high. So you talk about AI readiness: the skillset, the mindset, and governance, which is really important to me; we've talked a lot about that here on the podcast. But balancing these AI tools with human trust, and, as you said, rebalancing, I think it's important that we put more focus on the human side of it. So thank you. And Andrea, last question of the day.
As AI and Web3 continue to evolve, what kind of future do you hope to help shape, especially around leadership, human flourishing, and social impact? What ethical boundaries or guardrails do you believe every organization must adopt now to stay aligned with human values?
Andrea Iorio: Well, Brian, the goal, at least from my perspective, is definitely to have AI as an amplifier of human potential and not as a replacement.
That's the end goal, and in order to get there, we definitely want some ethical guardrails, right? At least one, or maybe the main one, is related to the transparency issue. There's a big problem with AI that can be called the black box problem: we don't really understand how AI makes its decisions.
And the problem with that is that if I'm a bank and I start using AI to approve or deny credit to my customers, then I might have an angry customer who's been refused access to credit coming to my human manager and asking, "Why was I denied credit?" Well, we cannot really answer that, and that is what breaks trust, right?
Especially in this world where customers are more and more empowered and have, again, infinite access to information. So transparency is very important, because it matters not only to the end customer but also to the internal customer, namely our employees, our teams. If AI is not transparent, that's a very big problem.
And so we have to implement responsible AI. The second big ethical guardrail I think is needed is related to the accountability issue. Whenever we look at AI nowadays, it is not really responsible for its decisions or its actions, even when it comes to AI agents, neither legally, nor technically, nor morally.
And so the problem is that people forget that the responsibility for the way we use AI is on us. Whenever we outsource more and more important decisions to AI, we need to always understand that we are responsible for those decisions. Think about military applications, right?
We humans, even people who pilot drones from a distance, feel emotions when they do that, or when they bomb a nuclear or military target. AI does not, and that's a really big problem. And the third big ethical guardrail that we need to put in place is privacy, data privacy. Whenever we look at the massive amount of data involved, in the case of GPT-4 I think it was 13 trillion tokens used to train the model, well, it definitely means some private data is in there. And the problem with that is: who owns that data? If an AI company monetizes that data, how come the originator of the data is not compensated for it? So there's a whole lot of topics related to privacy. And so the guiding question, just to sum it up, must be: does this technology, does AI, help people thrive?
Because if it ends up not doing that, I think we'll have a problem. And if yes, well, that's when we'll have stronger businesses and a stronger society. But again, it's our responsibility to shape such a world.
Brian Thomas: Thank you so much. You highlighted some great points there. We definitely need to be building ethical AI with these guardrails, and just to highlight them again: transparency, accountability, and privacy.
So, so important, and you went into each one of those pretty deeply. At the end of the day, we humans are responsible for the decisions we make with AI, and I think people need to really wake up and make that a priority. Right now it's big tech, a lot of companies, a lot of competition, who's gonna be first?
That's all I'm seeing. I do this podcast multiple times a week, and I talk to CEOs out of Silicon Valley who develop this technology, and it is kind of scary in some ways if we let it get ahead of us. So I appreciate that. And Andrea, it was such a pleasure having you on today, and I look forward to speaking with you real soon.
Andrea Iorio: Likewise, Brian. Such a pleasure being on the podcast and thanks everyone for listening.
Brian Thomas: Bye for now.