John Thomas Foxworthy Podcast Transcript
John Thomas Foxworthy joins host Brian Thomas on The Digital Executive Podcast.
Welcome to Coruzant Technologies, home of the Digital Executive Podcast.
Brian Thomas: Welcome to the Digital Executive. Today’s guest is John Thomas Foxworthy. John Thomas Foxworthy is the founder and CEO of the Global Institute of Data Science and the Fractional Chief Artificial Intelligence Officer at Turing Forge in Los Angeles. He completed his first machine learning project in 2005 and has worked on dozens of machine learning and artificial intelligence projects across various industries over the past 20 years.
John teaches artificial intelligence and machine learning to working professionals at Caltech in Pasadena, California, and remotely at the University of California, San Diego. He holds a Master of Science in Data Science from Northwestern University, and his master’s thesis was on deep learning.
Well, good afternoon, John Thomas. Welcome to the show!
John Thomas Foxworthy: Thank you.
Brian Thomas: Absolutely. Love having you join today. You’ve done a lot of podcasts from California. I’m in Kansas City, you’re in Los Angeles. And again, I hope you and your family are safe with all the fires, but I certainly appreciate you sneaking out to make a podcast in the middle of all of this.
So thanks again, John Thomas.
John Thomas Foxworthy: Thank you very much. Yeah, our family’s safe.
Brian Thomas: Good to hear. So, John Thomas, I got some exciting questions for you today because we really want to dive into AI like we’ve been doing on this podcast now probably for a year and a half. But let me ask you, what is human consciousness, and does it exist in artificial intelligence?
John Thomas Foxworthy: Great question. Let’s first define consciousness. It’s self-awareness, and it’s also the integration of information with that self-awareness. In other words, it’s the ability to recognize oneself as an entity, subjectively, not objectively. Right now only humans and animals can do this, not machines. Not even close.
You also have to have the ability to combine different inputs to be conscious: for example, audio for conversations, image and video for reading someone’s face as they talk to you, and so on. And unfortunately, we cannot identify where consciousness happens in the human brain, even though we all experience consciousness.
Consciousness is also not binary. It’s not a black-or-white issue where it either exists or does not exist. It’s really a full spectrum, hence someone saying they’re half awake, which is another way of saying they’re semi-conscious. I’d give an analogy here to help your audience: a thousand years ago, human beings believed the world was flat and said, just look toward the horizon.
Obviously, when you walk, it’s flat. I mean, why are you denying my experience? And of course we know today that the earth is not flat at all; it’s round. The same could be said about consciousness. We can experience and identify consciousness without measuring it or knowing where it comes from. More importantly, human beings have pretty much no consensus about what exactly consciousness is, especially within the scientific community, from my experiences at Caltech and UC San Diego. But what the artificial intelligence industry is doing with consciousness is more marketing than science.
Since the science on consciousness is not settled, some people, not all, are taking advantage of that to sell their products and services. And regulation of the AI products we currently consume is little to none. So if you say your product is conscious, you’re more or less unlikely to be sued.
I mean, there’s a possibility, but it’s a low probability. So you have every financial incentive in the world to draw customers by exaggerating your product’s consciousness, as a novelty, as an emotional appeal, so you can charge a higher price. And if it’s not a higher price now, you can charge it later.
And if it’s neither of those two, it’s basically to drive your company’s stock price as high as possible, which matters more than anything else. Another thing I’d say about this: if you claim your AI product is conscious, then when it produces errors or biases, you can just say it’s learning, like a little kid, and avoid responsibility.
There’s that as well. Now, as long as humans input data into machines, machines can only mimic humans. So, theoretically speaking, how could machine consciousness exist? I’d say this is how it would have to happen: machines would have to be involved from start to finish, with no humans. In other words, machines would collect the data, machines would process the data, machines would write the code, and machines would implement it.
So there would be absolutely no requirement for a human being in the loop. And lastly, not to mention the most obvious problem: machine consciousness simply isn’t feasible today. We don’t have the energy capacity to process audio, text, video, and all sorts of other data the way the human brain does.
We’re not even close; we’re several decades away. Above all, for machine consciousness to truly work, we would need AI agents that discover things on their own, not just what human beings have already discovered.
Brian Thomas: Thank you. I appreciate your perspective. There’s a lot to unpack there, obviously, especially if you want to get into the real philosophical or theoretical questions around human consciousness, when that happens, and when machines, if ever, will be able to adopt that particular quality.
That’ll be challenging, but you’ve shared quite a bit about that. And again, how people can tout that their product is this or that and may or may not get sued, I just love your perspective on this, so I appreciate that. John Thomas, do you have to have a PhD in a quantitative subject like computer science to get into AI?
John Thomas Foxworthy: No, not at all. Absolutely not. This is just an industry bias toward overpaying someone who’s overeducated. According to some salary surveys, if you have a PhD with little to no real-world experience, you can start at $150,000 a year. Like any population, some PhDs are good and some of them are, quite frankly, a nightmare.
Hiring a PhD to do data science, let me give you an analogy: it’s basically fishing with dynamite. You’re throwing sticks of dynamite overboard to find fish, but if you knew what you were doing, you’d use a fishing rod. That’d be my way of explaining it. There are real-world bachelor’s degrees in data science that exist explicitly and directly.
They exist at the University of California, San Diego, the University of California, Berkeley, the University of Michigan, the University of Rochester, and the University of Virginia, off the top of my head. But let me add to your question by explaining the requirements to get into AI. I have a real data science master’s degree from Northwestern University.
So I have, on my transcripts, the words artificial intelligence, natural language processing, deep learning, and so on. And I can tell your audience that a real data science bachelor’s degree would have less coding, not more, than a computer science degree. Computer science is not science; it’s applied engineering.
If you took a thousand computer science majors and asked them, what is science, and what is the scientific method, you would get a thousand answers. The exception would be Stanford University’s computer science department, which is also a data science degree, but they are by far the exception. You’re talking about one or two percent of the population.
On the other hand, data science is interdisciplinary, and its central core is statistics. So data science, machine learning, and artificial intelligence draw on linguistics, like ChatGPT’s computational linguistics product, and psychology, like recommendation engines with reinforcement learning, and so on. A little bit more on this: a bachelor’s degree in math has more math than a bachelor’s degree in data science.
Don’t get me wrong, there is math in data science: linear algebra, probability, statistics, some calculus. But not as much as a math major, not even close. What is most important is the ability to communicate, to write and express yourself. That matters much more for a data scientist than for a math major or a computer science major, especially when you’re doing storytelling with data.
This is critical. I’d say the best substitute for a PhD is an actual bachelor’s degree in data science, or a degree in statistics. Not math, not engineering, not physics, but specifically statistics, because the majority of the equations in data science and machine learning come directly from statistics. Before machine learning became a word, it was called statistical learning.
They just changed the name for marketing purposes. Not to mention the term data science itself: it originated with William S. Cleveland in 2001 at an academic conference on statistics. Professor Cleveland has a PhD in statistics from Princeton University, and today he’s a professor at Purdue University in Indiana, in the Department of Statistics, not computer science.
Brian Thomas: I love that. Got a little bit of a history lesson as well, and I appreciate that. I didn’t even know, until you explained, about the degrees and the places you’ve been, which makes you somewhat of a resident expert. I really appreciate that, and that’s helpful for our audience as well. John Thomas, do AI projects always require substantial coding skills?
Is being a code warrior a requirement to get into AI?
John Thomas Foxworthy: No, not really. This is a major misconception that exists in the industry. Back in 2005, a professor at the University of California, San Diego, recorded a course on data mining and forecasting using the random forest algorithm. And she used a no-code vendor software platform from New Zealand called Weka, W-E-K-A.
These no-code solutions have existed for quite a while. The problem is the industry bias that exists within the computer science and big tech world. By the way, I also had to take over that course, rewrite the video recordings, rewrite the content, and update it 19 years later, because I did it last year,
which was hilarious. But there are no-code platforms, about two dozen of them. Some of them are free; some are quite expensive. It depends on your technology infrastructure and environment. It is possible for someone who is not technical to get into machine learning and artificial intelligence.
I think this is very critical if you want to get into AI product management, because if you can run models without actually doing any manual coding, that’s quite helpful for going to market faster, reducing your operational risk, and just going for it. But now is probably a good time to explain the industry bias.
I’d say about 10, 15, 20 years ago, big tech firms like Google and Microsoft spent and donated a lot of money to computer science departments and boot camps to get everyone to do manual coding. They hired actors for this; you might have seen them. And what’s happening now is that ChatGPT is slowing all that down and unraveling it.
What’s happening is that these firms have all these sunk costs tied up in manual coding, and ChatGPT very much threatens their existing products. So, for example, Google Bard, which was a competitor to ChatGPT, was decommissioned into Google Gemini. It failed for technical reasons.
But what has happened is that these big tech firms could not get into text-based artificial intelligence because it threatened their existing products. So right now, across the very big tech firms, all this investment in manual coding, the idea that you have to be a code warrior and all that, is unraveling, and it’s also threatening the legacy code bases of products they coded up, say, 5, 10, 15 years ago.
And this really comes from ChatGPT. Not to mention, Microsoft tried to launch a large language model equivalent to OpenAI’s a couple of years ago, called LUIS. It failed. So this is really based not on people’s abilities, but on the bias Big Tech has from the investments they put forth.
And that’s given your audience the impression that you have to be a code warrior. The truth is there are two dozen no-code solutions, and there are several I can recommend for your audience that are relatively free. One of them is obviously.ai. Another is a German one, KNIME, K-N-I-M-E, which is relatively free.
It’s got a free YouTube channel as well, so you can get into this space without any coding.
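The random forest idea John Thomas mentions, fitting many simple learners on bootstrap resamples of the data and taking a majority vote, is exactly the kind of workflow a no-code platform automates behind a point-and-click interface. As a rough, self-contained illustration of that concept (a toy version using decision stumps rather than full trees, not any particular platform’s implementation):

```python
import random

def stump_fit(X, y):
    """Find the single (feature, threshold, flip) rule with the best
    training accuracy; flip=True inverts the prediction."""
    best = None  # (accuracy, feature, threshold, flip)
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            preds = [1 if row[f] >= t else 0 for row in X]
            acc = sum(p == yi for p, yi in zip(preds, y)) / len(y)
            for flip, a in ((False, acc), (True, 1 - acc)):
                if best is None or a > best[0]:
                    best = (a, f, t, flip)
    return best[1:]

def stump_predict(stump, row):
    f, t, flip = stump
    p = 1 if row[f] >= t else 0
    return 1 - p if flip else p

def forest_fit(X, y, n_trees=25, seed=0):
    """Fit each stump on a bootstrap resample of the training data."""
    rng = random.Random(seed)
    forest = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]
        forest.append(stump_fit([X[i] for i in idx], [y[i] for i in idx]))
    return forest

def forest_predict(forest, row):
    """Majority vote across all stumps."""
    votes = sum(stump_predict(s, row) for s in forest)
    return 1 if 2 * votes >= len(forest) else 0
```

The statistics underneath a tool like Weka or KNIME are the same train-and-vote loop; the platform just exposes it as configuration instead of code.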
Brian Thomas: Love that. You know, it’s kind of like you’re exposing the industry’s dirty little secrets in a way. And I don’t mean that in a bad way. Just it’s great to have somebody with your knowledge and your insights to this and really helping our audience, so I appreciate that.
And John Thomas, last question of the evening, if you could briefly share, are only big tech firms like Google, Microsoft, and Netflix good at AI?
John Thomas Foxworthy: I would say that both startups and big tech are good at AI, but it’s complicated, so let me unpack this. Most of the talented AI people, on average, work at startups, not at big tech firms.
For various reasons, the rest of us work mostly at various other companies and in consulting. However, there are individuals at big tech firms who are some of the best in the world; no argument there. But does that excellence contribute to the rest of the organization? No, not at all. Two things can happen at once, because big tech is big.
There’s plenty of room, and there’s a wide range of abilities. Not to mention, a good way of explaining this from another angle is that AI startup talent sits in a complex ecosystem. What I mean by that is they’re trying to balance their independence as a company while also depending on big tech cloud infrastructure like AWS, Google Cloud Platform, et cetera.
So AI talent likes to focus on small firms rather than big tech, because there’s more freedom, there’s more opportunity to deliver products, you can take more risks as a small startup, you can be more flexible and agile in your decision making, and you can be closer to your customer for immediate feedback.
These are all very important. But big tech and their talent have a massive bureaucracy. They might have micromanagers who could stifle your innovation. Not that that’s always true, but it does happen. And big tech has more financial resources for research and development. However, there is a lot of groupthink that stifles the critical thinking needed to develop a viable AI product within big tech.
I’d also answer this question with a little bit of a forecast. I think multiple things are going to happen within this decade. It’s possible that a big tech firm will acquire another AI startup; I totally agree that can happen. But at the same time, I think AI startups will remain independent to capture even more market value, going from, say, a $5 billion to a $50 billion valuation, which happened last year with an AI startup, because the AI industry is constantly changing, which means you can capture even more talent.
So I strongly believe that one day, one AI startup will be as big as Google or Microsoft. It will be valued at $1 trillion, it will remain independent, and it will have an outstanding staff of AI talent.
Brian Thomas: Amazing. Thank you. And I appreciate that you shared quite a bit on that particular question. And you’re right.
I’ve had some of the founders of smaller AI development startups on this podcast, and it’s the same exact message: much more agile, much more nimble, able to make decisions and move quickly without all the red tape. Of course, what they struggle with, and what the big tech firms have, is the deep pockets, obviously.
Appreciate you unpacking that. That’s so appreciated. John Thomas, it was such a pleasure having you on today, and I look forward to speaking with you real soon.
John Thomas Foxworthy: Thank you, Brian.
Brian Thomas: Bye for now.