Eva Marie Muller-Stuler Podcast Transcript


Eva Marie Muller-Stuler joins host Brian Thomas on The Digital Executive Podcast.

Welcome to Coruzant Technologies, home of the Digital Executive podcast. 

Brian Thomas: Welcome to the Digital Executive. Today’s guest is Dr. Eva Marie Muller-Stuler. Dr. Muller-Stuler leads the data and AI practice for Ernst & Young in the Middle East and North Africa. She’s responsible for the development of data and AI governance frameworks and strategies, and for the implementation of complex data science and AI projects and transformations.

Previously, Dr. Muller-Stuler was Chief Technology Officer for Artificial Intelligence and Chief Data Scientist for the Middle East and Africa at IBM. After studying mathematics and completing her dissertation, Dr. Eva Marie Muller-Stuler started her career advising European companies on restructuring and performance optimization.

For this, she developed many first-of-a-kind data methods and models at KPMG in London, where she led one of the first data science teams in Europe to develop groundbreaking data-driven methods and techniques. Since 2013, after seeing the impact of her work on industries and society, she has spearheaded the development of ethical and responsible AI policies and frameworks with governmental and non-governmental organizations, and highlighted the possibilities and impact of data science and AI on the economy, society, and individuals.

Well, good afternoon, Eva. Welcome to the show.

Eva Marie Muller-Stuler: Good afternoon, Brian. Thank you for inviting me.

Brian Thomas: Absolutely. I really appreciate you doing this today, making the time, and you’re currently calling out of Dubai, which is halfway around the world and I appreciate that. It’s hard to make time zones work, so again, do appreciate your flexibility.

Eva Marie Muller-Stuler: Absolute pleasure.

Brian Thomas: Thank you so much. We’re gonna jump into your first question. Your academic background is in mathematics, computer science, and business. How did this interdisciplinary foundation lead you to a career in data science and AI, and what motivated your focus on these fields?

Eva Marie Muller-Stuler: It is actually a brilliant question, because when I was studying at university, everybody was asking me, why are you combining these random subjects?

And I actually remember having a job interview where the person interviewing me was so hard on me and was like, oh, you are all over the place. You have no clear red line. It’s a bit of business, it’s a bit of science. But I started my career in financial restructuring and then joined the team that did financial valuation and business modeling.

So we were building huge Excel models to forecast cash flows, P&L, balance sheets, and so on. Those were the early days of very small data science, in a way, or data analytics on Excel. And that field actually really connects the dots. I was suddenly one of the most sought-after combinations: everyone still loves technical people who actually understand business. But back in the day, I really just decided to study math because I really enjoy it.

I still really love it. For me, doing mathematics is like meditation, and the combination of computer science and business gives you a good way of applying it. And so that was my thought behind it.

Brian Thomas: Thank you for this story. Appreciate it. Our audience does too. And yeah, I can imagine someone saying, why are you so spread out?

You have no vision, you know; you’re not in science, you’re in business. But I can see that mathematics is your passion, and you can see where to connect the dots between the technology and the business. And really, you’re helping the business by having that technology background.

So I appreciate that. And Eva, since 2013, you’ve been a proponent of ethical and responsible AI, collaborating with various organizations to develop related policies and frameworks. What pivotal moments or experiences led you to champion this cause?

Eva Marie Muller-Stuler: When we started going into proper AI and data science in 2013, it was the Wild West.

Before GDPR, we called one of the world’s largest phone companies and said, can we have all your data? And they didn’t ask why, they didn’t say no; they just were like, yeah, of course. We didn’t even sign a contract or anything with them, and sent one of our juniors over with flash drives.

And suddenly we knew exactly who called whom, from which location, for how long, how many times a day, and where they spent the nights, the days, and the weekends. And we were sitting there looking at each other and thinking, we don’t need to build Excel models or AI models at all anymore.

We could just go and blackmail people with that information. If we saw that somebody was sleeping in a hotel less than five kilometers from their home, they were probably cheating on their spouse. And with that, we also realized our models became slightly biased through the whole data infrastructure we were feeding them.

We then tried to take things like religion, gender, ethnicity, and so on out of our models as input data. And we still realized that the models were able to pick up the biases. That, on top of the big scandal with Cambridge Analytica, made me realize that there is a big, big change coming in society.
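The effect Dr. Muller-Stuler describes, where a model recovers a removed protected attribute through correlated features, can be sketched with a hypothetical toy example. All the records and the deliberately naive approval rule below are made up for illustration:

```python
from collections import defaultdict

# Hypothetical loan records: the protected attribute itself is NEVER
# shown to the "model" below, but postcode correlates with it.
# (postcode, protected_group, repaid)
records = [
    ("A", 1, 0), ("A", 1, 0), ("A", 1, 1),
    ("B", 0, 1), ("B", 0, 1), ("B", 0, 1),
]

# Naive model: approve a postcode if its historical repayment rate
# exceeds 0.5 -- no protected attribute is used anywhere.
repaid_by_postcode = defaultdict(list)
for postcode, _, repaid in records:
    repaid_by_postcode[postcode].append(repaid)
approve = {pc: sum(v) / len(v) > 0.5 for pc, v in repaid_by_postcode.items()}

# Approval rate per protected group: the bias survives the removal,
# because postcode acts as a proxy for the removed attribute.
by_group = defaultdict(list)
for postcode, group, _ in records:
    by_group[group].append(approve[postcode])
approval_rate = {g: sum(v) / len(v) for g, v in by_group.items()}
print(approval_rate)  # group 1 gets 0.0, group 0 gets 1.0
```

Even though the protected attribute was stripped from the inputs, the two groups end up with completely different approval rates, which is exactly the kind of indirect bias she says the models kept picking up.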

AI, in my opinion, was always here to stay and would have an impact on how we do business and how we interact with each other. I realized, okay, we have to make sure that we don’t leave certain demographics behind. We have to make sure that it is explainable and transparent. And that’s why I started campaigning for it and said there is something here that needs governance: people need to be aware of it, and governments need to have rules and regulations to govern it, so that it doesn’t get out of hand and the people we need to protect are protected.

Brian Thomas: Thank you. And I think that is so important. You know, in your example, early on we saw how people were kind of carefree with sensitive data. They weren’t thinking ahead. It’s not like they were trying to be malicious; I think they were just a little bit naive or a little bit carefree with that data.

And of course you saw some of the bias in the data as well. So I think you’re spot on, and governance is so, so important around data and ethical AI. So Eva, you’ve emphasized the importance of strong AI governance. Could you elaborate on the key components that constitute an effective AI governance framework and how they contribute to responsible AI deployment?

Eva Marie Muller-Stuler: So responsible AI, for me, covers the whole chain from beginning to end. It means making sure that your data isn’t biased and that you actually know what’s in your data. It starts with a question: does it make sense to build that use case? Is it really right? In everything we do in the world, we have the pressure of achieving a high ROI and getting our money’s worth on our investments.

And of course, it’s always easier to get good results with a million-dollar investment if you focus on white male demographics than if you say, we want to make it fair even for African American women over 65, and we have no data on them. Suddenly things become very expensive. And so it goes from, do I have the right data?

Do I have the right demographics? Am I transparent about it? Is the use case correct? How do I build it in an accurate way? All the way to, on the technology side, having a clear MLOps framework and monitoring and retraining the model, and saying, okay, we need to make sure it stays accurate. We need to make sure it stays secure and protects your privacy.

We need to make sure it is explainable, and that it is fair and unbiased. These are things you have to build into the whole chain, from data to monitoring, all the way across, and you have to monitor your deployment going forward. The interesting thing about AI, and especially when we come to large language models, is that it is not like old software development or pretty much every other project, which has a start and an end, and then you’re done.

It is not as if, once it’s built, we can walk away. No, we still have to constantly be there, monitor it, retrain it, and make sure that, even though it was ethical and responsible at the beginning, it actually stays responsible going forward.
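The monitor-and-retrain loop she describes can be sketched minimally. The class name, window size, and tolerance below are illustrative assumptions, not part of any standard MLOps framework:

```python
from collections import deque

class DriftMonitor:
    """Flags a deployed model for retraining when its live accuracy,
    measured over a sliding window, falls below the offline baseline.
    A minimal sketch of one monitoring step; real MLOps stacks also
    track data drift, fairness metrics, and latency."""

    def __init__(self, baseline_accuracy, window_size=100, tolerance=0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.window = deque(maxlen=window_size)

    def record(self, prediction, actual):
        # Log one prediction/ground-truth pair from production.
        self.window.append(prediction == actual)

    def needs_retraining(self):
        # Wait until the window is full before judging.
        if len(self.window) < self.window.maxlen:
            return False
        live_accuracy = sum(self.window) / len(self.window)
        return live_accuracy < self.baseline - self.tolerance

# Example: a model that scored 0.90 offline starts missing in production.
monitor = DriftMonitor(baseline_accuracy=0.90, window_size=4)
for prediction, actual in [(1, 1), (1, 0), (0, 1), (1, 0)]:
    monitor.record(prediction, actual)
print(monitor.needs_retraining())  # live accuracy 0.25 < 0.85 -> True
```

The point of the sketch is her "no start and end" observation: the check runs continuously against production traffic, so responsibility is an ongoing property of the deployment, not a one-time sign-off.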

Brian Thomas: Thank you. I appreciate that. You’re absolutely right. You need to understand your data and make sure there’s no bias in there.

And when you’re building this out, you want to ensure transparency and have a clear plan and framework. And as you mentioned, I think it’s important from start to finish, right? And that’s not it: you’ve got to continue to monitor, improve, and make sure that the outputs continue to be clear, transparent, and non-biased.

Eva, last question of the day. Being named the World’s Best Data Scientist in 2020 and one of the 10 Most Influential Women in Technology in 2021 is a testament to your impact. How have these accolades influenced your work, and what message do you hope to convey to aspiring professionals in the tech industry?

Eva Marie Muller-Stuler: The impact was, I think, that you get bigger and more interesting projects, and that you get known and more trusted for what you deliver. But on the other hand, you still have to deliver every single time. You still have to retrain yourself constantly to stay on top of things. AI in 2020, 2019, and so on looked very different from what it looks like now.

So it is a job that keeps on changing, and that’s actually what makes it so exciting. Most of all, I think it changed my confidence. It changed my confidence in meetings, when I didn’t understand something, to just say: hold on. If I don’t understand it, probably nobody else understands it. So I just kept on asking questions.

The projects became bigger and more interesting, but it’s really nothing you can relax on; the accolades you get at one point in time are for things you’ve done before. It definitely means you have to keep on going, and I think that is the biggest message I can share with everyone. And not just in AI: pretty much in every single field, in every industry you work in.

We are in a time at the moment where things are changing so rapidly and so fundamentally that what we learned yesterday might not be valid tomorrow. The times of finishing university and saying, thank God I’m done learning, are over. We have to constantly relearn, train more, do more courses, read new publications, and stay on the ball.

Because only when you’re confident, and you actually know the topic and what you’re talking about, can you have an impact.

Brian Thomas: That’s amazing, and I really appreciate that. You know, we have a lot of people in our audience from every age group and every demographic, male or female, and I think your message here is important: obviously, persistence is key.

It helped with your confidence, obviously. And you mentioned something: just because you hit a goal or a milestone, or you get an award or an accolade, you still need to push forward, learn, aspire to the next goal, and keep growing. And I think that is so important to share with our audience today.

So I appreciate the message. Eva, it was such a pleasure having you on today and I look forward to speaking with you real soon.

Eva Marie Muller-Stuler: Thank you so much, Brian. I really enjoyed our conversation.

Brian Thomas: Bye for now.  

Listen to the audio on the guest’s Podcast Page.
