
Peter McAllister Podcast Transcript


Peter McAllister joins host Brian Thomas on The Digital Executive Podcast.

Brian Thomas: Welcome to Coruzant Technologies, Home of the Digital Executive Podcast.  

Do you work in emerging tech? Are you working on something innovative, or maybe you're an entrepreneur? Apply to be a guest at www.coruzant.com/brand

Welcome to the Digital Executive. Today’s guest is Dr. Peter McAllister. Dr. Peter McAllister is a veteran of innovation who has worked in the mining, healthcare and supply chain industries, most recently as a data and AI business leader and implementer, and that experience informs his exploration of these themes. “The Code” challenges senior leaders to reflect on their role in shaping technology, demanding foresight and responsibility in navigating the complex landscape of AI. 

Dr. Peter McAllister’s book, “The Code: If Your AI Loses Its Mind, Can It Take Meds?”, offers a gripping narrative that resonates deeply with the challenges faced by today’s leaders in technology and innovation.  

This compelling sci-fi novel invites leaders to consider the balance between technological advancement and ethical accountability as it follows Liam, a seasoned engineer tasked with developing nanobots for asteroid mining, a frontier that promises to revolutionize the industry while posing significant risks.  

Well, good afternoon, Peter. Welcome to the show.  

Peter McAllister: Thank you. It’s a pleasure to be here.  

Brian Thomas: Absolutely, my friend. And what’s more, and I always highlight this, I love doing international podcasts. 

You’re in Melbourne, Australia today. Actually, you’re a day ahead of me: it’s Friday for you, and here it’s Thursday for me. I’m in Kansas City, and I just love to highlight guests from all around the world. So again, thank you from the bottom of my heart. Peter, I’m gonna jump into your first question. 

You’ve had a fascinating career spanning mining, healthcare and supply chain innovation. Can you walk us through your journey and how it led you to become a data and AI leader and now an author?  

Peter McAllister: It’s an interesting question. Sometimes I wonder what the answer to that is myself. I like to live at the collision between people, business and technology. 

And that started in the technology field. I started writing computer programs at about the age of nine or 10, and carried on down that path with a love of computing and also a love of science. I ended up getting a PhD in biotechnology, which had a whole bunch of mathematical modeling included in it, and then tried to work out where I could have the best impact in the world. 

At that collision point between people, business and technology is where I ended up starting off in the mining industry, looking at ways of trying to minimize the environmental impact of the industry and what we could do to lessen the footprint in terms of what was happening in that space. 

Then I decided to turn left and went into the healthcare industry, looking at computer systems that were supporting general practitioners and specialists, rolling those out and implementing them in one of Australia’s largest health companies. From there, I took that expertise into their pharmacy wholesaling space, which got me into the logistics and supply chain side of things. 

And from there it became very much: there must be something smarter we can do here, there must be something better we can do here, there must be a different and better way of driving things forward. That got me into the boring, old-fashioned data science and data analytics side of things, and now into the more AI side: what can we do to go from looking at the world through the rear-vision mirror, which is what a lot of the data science part of the world does, to looking at the world through the windscreen and seeing what’s out there moving forward? 

So that’s been my journey, from the kid that programmed computers through to the person saying, okay, we really need to make sure we’re using these technologies in such a way that as a business we can get value from them, and we’ve gotta bring people along on that journey, ’cause if you don’t bring the three together, you don’t tend to move forward. So that’s the professional side of things. 

On the writing side of things, I started writing effectively as a way of getting all the crazy thoughts out of my head. It was very much a, hey, this is an interesting concept, let’s write it down and explore it. After doing that for a couple of years, I started thinking, well, actually, rather than a whole bunch of disjointed stories, maybe I should write one that brings it all together and combines the passions that I have. And that’s led me to where I am: working in the data science and AI space and logistics, and writing science fiction novels in my spare time.  

Brian Thomas: That’s awesome. I think there’s very much a parallel between you and I here; a lot of technologists love their sci-fi, and I think you’ve just found the right passion. 

Going back, I love what you said: you live at the collision of people, business and technology, and you really stepped through all of that. I love your amazing backstory, from getting involved in technology in your early youth to growing up and obtaining your PhD, going into data science, and now obviously AI. 

But again, I just love the passion for what you’re doing and getting into writing these fictional books as an author. I think that’s awesome. So, thank you. And Peter, your book, “The Code: If Your AI Loses Its Mind, Can It Take Meds?”, blends science fiction with real-world concerns. What inspired you to tell the story through a fictional lens rather than a traditional business or technical book? 

Peter McAllister: The best way of answering that is as a human and as a part of a human society. We’re a storytelling species. We tell stories. We’ve always told stories from the beginning of our ability to communicate. You look at the way people have told stories about what’s in the sky, and different cultures have come up with different stories about constellations and those kinds of things. 

So, we’re a real storytelling species. Fiction also allows you to explore a lot more what-ifs and crazy ideas that people will accept and listen to, letting you take them on a journey. Whereas in that more traditional, technical or business kind of world, that’s nowhere near as strongly accepted; people want that rigor, that containment and that understanding of where things are going. 

I guess also Arthur C. Clarke is a bit of a hero of mine, and he managed to spin that blend between writing technically and also writing science fiction. Look back at probably his first well-known publication, around 1945, where he expounded the idea of communication satellites. 

The estimates are that maybe 10,000 people read that article, and it made a massive, profound change, through a technical network, to what’s happening in the world. He also wrote “2001: A Space Odyssey”, which sold 15 million copies, and I think the movie grossed about $160 million in 1968 currency. So that’s about one and a half billion dollars now. 

So, he reached a much, much larger and more generalized audience, and changed a thought process there through that medium of fiction. From that perspective, I like to work in both. The other way of looking at it is that I’ve got a relatively dark sense of humor, and I couldn’t let that run wild in a business context or a technical book. 

But in “The Code” I’m able to get out there and get relatively dark and relatively sarcastic about some of the things that are going on in the world.  

Brian Thomas: Thank you. Really do appreciate, again, the backstory and, and why you wrote the book and, and the route you took obviously, and I totally agree. 

This is what podcasting is about. Also, humans are a storytelling species, and everybody has a story, which I really love; that’s why I love doing podcasts. But you took that fictional route because, as you said, it gives you more leeway with the story, builds interest with the audience, et cetera. 

Things can actually be a little bit more plausible to a larger audience with the writing style you took here. So, I do appreciate that, Peter. The next question: for senior executives navigating AI adoption today, what does responsible leadership look like when the technology’s evolving faster than governance and regulation? 

Peter McAllister: I’ll split that into two parts. If you look at what’s happening in the cybersecurity world, some of the best AI minds are in the hacker community. If, as a senior executive worried about cybersecurity, you are not focused on where that world is going, what it’s doing and its ability to impact you, then you’re not necessarily gonna be on top of the kinds of threats that are gonna come to you as an organization. 

As that senior leader, chances are you’re not necessarily gonna understand the technology or be across all the things that can happen, so you have to take on board a great degree of trust in your technical people, that the advice they’re giving you and the direction they’re giving you is the best for your organization. So in the security world, the bus has definitely left the station with the hacker community. If we are not on the next bus chasing it, it’s gonna get very messy very quickly. 

On the other side of it, in terms of how you’re using it in the business: what we’re seeing at the moment is this mantra of using generative AI, using agentic AI, as quickly as possible. We’re seeing a lot of CEOs and CTOs saying, we’re gonna use it in our business, quick, give me a business case. There’s a lot of pressure. 

In the agent space, the biggest risk I’d be flagging to those senior leaders is this: if you don’t use it wisely or correctly, there is a significant chance you will make your organization dumber. By that I mean there is a lot of effort saying, let’s automate these processes, let’s bring AI in to run these processes, and a lot less effort on: is this actually the right process? 

Is this the best representation of the process in our organization? Or are we codifying a solution that is a workaround upon a workaround upon a workaround, putting that into an AI framework, and then forgetting about what actually happens in the business and about keeping people in the business who understand those processes? 

So the real message back out in that space is: yes, this is a great tool, but apply it to things that are already good to make them better, rather than applying it to your biggest problem that may be lurking around, ’cause you run the risk of codifying bad business process, and then it’s even harder to change and even harder to improve. From that perspective, there’s also the ethics side of things and the accountability side of things. 

Most of the security and governance teams that I see and work with on a day-to-day basis are well aware of the kinds of threats that can come from this, and they will feel like a very heavy handbrake on trying to make these adoptions and these changes. You need to listen to them, but not be bound by them, because there are situations where you need to take that risk. 

Or there are situations where the security team says something is a risk that really isn’t; they’ve looked at it from a technical, but not from a business, perspective. So overall, bringing AI into the organization, you need to remember that you have business processes first, and the technology helps you execute those better, faster and more efficiently, rather than just taking what you’ve currently got and putting it in a box. 

Brian Thomas: Thank you. Really appreciate that. You’re absolutely right: we gotta have those good business processes up front, and we can’t just assume that AI can do everything for us. That could lead us down a bad path. But taking something good and making it better using the technology is a great idea. On the security side of things, you’re absolutely right. 

Some of the best AI minds out there are in the hacker community, and we need to be well aware. Senior execs need to be tuned into this and have the right folks on their team, of course, but they gotta be aware of this rapidly evolving AI and threat landscape. So, thank you, Peter. The last question of today: as we look ahead, what is your perspective on the future of AI? 

Do you believe we are heading towards a world where systems operate beyond our control, or can we build a future where innovation and ethical accountability coexist?  

Peter McAllister: That’s a great question, and I’ll answer it by talking about an analogy. If you look at the way automobiles came about and came into general use, around the 1890s you could get a car. 

It was very expensive, but if you went to drive it on a track or a road, you needed someone walking in front of you with a red flag. You had a speed limit of four miles an hour. The person had to wave that red flag, and in some states they had to blow a horn as well, to alert people that there was something dangerous on the road, something coming down there that could cause them harm, so they could take shelter or otherwise. 

The last of the red flag rules, I think, was taken out of existence in the UK in about 1896. Roll forward five human generations: the motor vehicle is totally and utterly embedded in our society. It is out there. Without it, we would not function; we would not have grown to where we are at the moment. But it also currently takes around a million lives a year in automobile accidents and those kinds of things. 

So effectively, we’ve made a trade-off of having the value of the automobile in our society for a cost of around a million people a year. Now, I don’t think anyone sat down and worked that out and made the trade-off. There wasn’t a summit in 1905 that said, this is how many people we’re gonna be happy with dying as a result of this particular tool, so these are the kinds of boundaries that were put on it. 

But over that period, we have put in road rules and speed limits and airbags and all of those kinds of things to try and mitigate, as a society, that bargain that says: the car for a million people a year. Right now, I think we’re in that red flag rule space. 

Anytime you use a ChatGPT or similar, you’ll get that little red flag warning at the bottom which says, in effect, this might be complete rubbish; I don’t think they use quite that terminology all of the time. So we’re at that stage now. If you look at it in terms of the evolution of the automobile, the red flag rule space, this is where we start to make decisions about the value versus the impact on us. 

The motor vehicle took five human generations, and it wasn’t subject to Moore’s Law, which will make this move a lot faster. So effectively we could go down a path where AI basically starts to, I’ll put it this way, farm humans, in the sense that AI and its use in society and in the world can take over pretty much the majority of the roles, and our role in the economy becomes to consume. 

’Cause we don’t need to produce anything, we don’t need to monitor anything, we don’t need to do anything else along those lines; the technology is doing that for us. So that’s one extreme. And of course the other extreme is The Matrix, where effectively we end up in a situation where we are either pets of, or subjugated by, that particular beast. 

Where do I think it’s gonna go? I think at the end of the day, humanity ends up with this 51/49 percent split between good and exploitation, and the proportions may vary: sometimes we’re 51% good and sometimes we’re 49% good. So if we don’t sit down and think now about where we want to be in 50 or a hundred years’ time in interacting with this technology, we will be on a journey where every individual decision being made is another correction point in that particular journey. 

So it’s probably not the best or the most exciting or the most optimistic view of where things are going, but I think we are in a situation where we need to make some decisions about where we want this to go, rather than being driven by a series of decisions as time goes by that may take us down a path we’re not happy with. 

Brian Thomas: Thank you. Really appreciate that, and you outlined that beautifully. You talked about that red flag rule analogy with cars, and over time cars became part of our everyday lives around the globe. I think you said approximately a million lives are lost per year in automobile accidents, which totally makes sense. 

And of course we have a lot of safety rules, and we’re bringing AI into cars to make them even safer, but we still have that to deal with. That aligned with what you said about AI and how it needs to be governed. Obviously we could end up like The Matrix, and we all know that trilogy very well, but we do need humankind at the front of this to make sure we don’t make a series of bad decisions. 

So again, I appreciate your insights on that. Peter, it was such a pleasure having you on today, and I look forward to speaking with you real soon.  

Peter McAllister: Thank you very much, Brian. I really appreciated your time.  

Brian Thomas: Bye for now. 

Listen to the audio on the guest’s Podcast Page.
