Charles Yeomans Podcast Transcript
Charles Yeomans joins host Brian Thomas on The Digital Executive Podcast.
Brian Thomas: Welcome to Coruzant Technologies, Home of The Digital Executive Podcast.
Do you work in emerging tech? Are you working on something innovative? Maybe you're an entrepreneur? Apply to be a guest at www.coruzant.com/brand.
Welcome to The Digital Executive. Today’s guest is Charles Yeomans. Charles Yeomans is the co-founder and Chief Executive Officer of Atombeam Technologies, bringing over three decades of executive leadership experience spanning investment banking, technology, private equity investment, and mergers and acquisitions.
Throughout his career, he has led multiple organizations, including Tri Media Portal Group Holdings, where he served as both President and CEO. A seasoned entrepreneur, Charles has successfully founded and scaled companies, including major insurance brokerages. Prior to his business career, he served as a US Navy intelligence officer.
He holds an AB from Kenyon College and an MBA from Stanford University.
Well, good afternoon, Charles. Welcome to the show.
Charles Yeomans: Same here, Brian. Glad to be here.
Brian Thomas: Absolutely, my friend. I appreciate you making the time and traversing two time zones. You’re in San Francisco today; I’m in Kansas City. And that’s a huge deal to me.
You’re not in my backyard and I’m not in your backyard. So, again, a heartfelt thanks to you for making this work. And Charles, if I could, I’m gonna jump into your first question. You’ve had a diverse career spanning military intelligence, investment banking, private equity, and entrepreneurship.
What experiences shaped your journey to becoming CEO and co-founder of Atombeam Technologies?
Charles Yeomans: Yeah, like many folks, I’ve had an interesting sort of turn-left, turn-right path, and you never really know. Well, sometimes you do know: this is a big deal, I’m making a big change here.

But a lot of times you don’t. One of ’em was when I was in the Navy. I was doing really well. I was a briefer, went over to the White House for intelligence briefings I gave, and so forth. It was a great experience. I had always planned to go to business school after that, but it got to be a very difficult decision.

And I said, okay, I’m only gonna go if I get into Stanford or Harvard. I got into Stanford, and I sort of just decided, okay, that is what I committed to do, so I’m doing it. Another of those turns was leaving investment banking to become an entrepreneur. I just started to feel like, okay, I’m not cut out for this.

I’m too restless. I have to think about things too hard. I get fascinated by technology. So I really started drilling down and doing things, and that turned out to be the right choice for me. Ever since I left investment banking, I’ve been an entrepreneur.

I ended up being CEO of private-equity-backed companies, as well as other companies that were startups of various kinds. So it’s an interesting thing, and if you love to learn about new things and you feel strongly that you have a vision for how things work, this is the life for me.
Brian Thomas: Thank you. I really appreciate that. And I like your commitment, your steadfastness. You’ve had a diverse, great career, from the military, including the White House, onward. Whatever you did, the message I took away is that you were determined and committed to whatever career or education you set your eyes on.

And I thought that was interesting. And again, very diverse. The last thing I’ll say here, just to highlight what you said: you really love to learn about new things, and you went after it, and I know there was a lot of hard work behind that. We all get that as entrepreneurs, but I really appreciate the backstory of where you started and where you ended up.
And Charles, Atombeam is focused on transforming how data is processed and transmitted. What core problem in today’s data infrastructure inspired the company’s creation?
Charles Yeomans: So, that’s a really interesting point. The fastest growing category of data is the data that machines generate.

It’s usually called IoT data, Internet of Things data, and it’s a really, really big chunk of the data that’s out there. It is the fastest growing, and by 2030 the expectation is it will be the biggest category, even bigger than video. And that’s because there’s so much of it.
I mean, think about anything. Think about a car. It used to be a car wouldn’t send any data, then it sent a little bit, once a day. And now cars generate 25 gigabytes an hour, but the capacity of the network to send that is about 1%. So very little moves, even though the car makers would like a lot more of it, because it is expensive to send.
There isn’t that much capacity. And that’s just one example; there are many, many. Take all the new ideas about being able to use a cell phone directly over a satellite. That is almost certainly going to overload the satellites, which means you need a ton more capacity there than is really practical.
So when you boil it all down, there’s tons of this machine-generated data inside of data centers, data moving around everywhere, and it is the fastest growing category. But the capacity of networks to handle it is growing at about half the rate the data is. That means a lot of data just doesn’t get to where it needs to go.
So they end up sampling the data. They say, okay, we wanna make sure this oil platform doesn’t fail, but the network can handle maybe two or three percent of all the data that’s generated, which is fairly typical. And that means things fail. So how do you make it better? How do you send more data over the same network without having to replace the whole network?
I mean, you can go out and launch a bunch of satellites, of course, but doing that is expensive compared to software. So our thinking was, okay, what if we can create software that compacts a stream of data? We use that word because it’s different than compression.

In other words, it’s not dependent on some pretty big file like compression is. These messages that a machine generates are too small for compression. So what we do is swap out the actual data for little code words out of a code book, and the consequence is we can send, on average, about four times more data through the same network, which means that in many cases you don’t have to replace your network.
You just install our product, called Neurpac, in that network, which means that network now has dramatically more capacity. For instance, Ericsson, one of our partners, we did a webinar with them last week, and they’re all over this.

I mean, they make these things called gateways, which are a way of collecting data from a bunch of sensors and then sending it on to the cloud or somewhere. And what’s the point? The point is to send data. Well, if you’re Ericsson and you install Neurpac, you can send four times more data through the same bit of hardware, so that makes it a better gateway.

And so we can address all this stuff with our Neurpac product and literally transform the world and how much data moves, by changing the fundamental basis of how data is represented. It’s still very usable, readable, but it’s way smaller, and that changes everything in the world. It’s a fundamental thing.
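The code-book swap Charles describes can be sketched in a few lines of Python. Everything here, the message strings and the tiny codebook, is invented for illustration; Atombeam hasn’t published Neurpac’s actual encoding, so this only shows the general idea of replacing small repeated messages with short codewords.

```python
# Toy sketch of codebook-based compaction: each small machine message is
# replaced by a one-byte index into a codebook shared by sender and receiver,
# so tiny messages shrink even though they're far too short for compression.

# Hypothetical codebook of frequent sensor messages, agreed on in advance.
CODEBOOK = ["temp=72;ok", "temp=73;ok", "door=closed", "door=open"]
INDEX = {msg: i for i, msg in enumerate(CODEBOOK)}

def compact(messages):
    """Encode each known message as a single-byte codeword."""
    return bytes(INDEX[m] for m in messages)

def expand(codewords):
    """Recover the original messages from the codewords."""
    return [CODEBOOK[b] for b in codewords]

stream = ["temp=72;ok", "door=closed", "temp=72;ok"]
packed = compact(stream)
assert expand(packed) == stream          # lossless round trip
print(sum(len(m) for m in stream), "->", len(packed), "bytes")  # 31 -> 3 bytes
```

The roughly four-times figure quoted in the interview is an average over real traffic; the ratio in this toy depends entirely on how long the messages are and how well the codebook covers them.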
Brian Thomas: That’s awesome. Thank you for sharing. I didn’t know the fastest growing category of data is data created by machines, and you said it’s estimated to grow bigger than video in the future, which, gosh, we know video is just massive, so this is interesting.

And the capacity of these networks obviously just cannot keep up. A typical network, you said, is only growing at half the speed that the data is being generated, which is gonna create an issue. But thankfully, your platform Neurpac can send up to four times more data through a typical network.
Again, you’re solving a big gap here, really, to make the world a better place, and I appreciate that. So, Charles, if we switch gears: data efficiency and optimization are becoming critical, and I want to go a little bit deeper into this. As AI and connected devices scale, how does Atombeam’s technology help organizations handle increasing data demands more effectively?
Charles Yeomans: Well, part of it is what we were just talking about, sending more without replacing the network. Let’s take ViaSat, one of our partners. Their satellites cost about half a billion each. They’re geosynchronous, big, big honking satellites, and those things are tough to change. This is not a LEO thing like Starlink.

You’ve got a serious piece of iron up there, and you can’t just sort of swap it out or drive up and upgrade the hardware on it. But now, suddenly, with a bit of software, instead of launching two or three more satellites to add to your network, if you are ViaSat, you can just use Neurpac.
So it’s a massive impact. The other thing I wanna mention here is LLMs. Think about LLMs and how they scale, which is badly. And the reason is, fundamentally, they never learn anything. They’re trained, and once they’re trained, they are deployed and people can use ’em.

But the problem is that once they’re trained, the weights, which are what’s inside these massive transformers, are frozen. They absorb basically all the information in the world, and after that they stop learning. They can’t learn anything new.

So when I ask an LLM a question like, what is a golden retriever, it’ll go through the same process the first time I ask it, the second, the third, and the millionth time. You get to a point where you say, all right, this thing should already understand this.
It should not have to do every single computational step, because it should know what a golden retriever is. And so our new AI, which we call the Persistent Cognitive Machine, or PCM, is a totally different architecture. Instead of the approach that an LLM uses with transformers, it uses a different fundamental basis.
And that allows it to learn, and by learning, become way more efficient. The first time it learns what a golden retriever is, it goes through a process. It isn’t as giant and inefficient as an LLM, but it’s big; it’s still not as efficient as it will be the second time.

By the third time you ask, it’s like a human brain. We don’t have to be told literally millions of times what a golden retriever is, or look at millions of pictures. We learn after a couple of times, and then a few more times, and we get better and better. PCM is the same: it becomes more efficient.
And the other thing is, there is an enormous need out there for no hallucinations. Hallucinations are a huge problem. For instance, we’re talking to DARPA right now, which is very keen on our AI. And the reason is they can’t have hallucinations. Zero. You can’t make a mistake and fire a missile at a cruise ship or something. You can’t do anything wrong.
And so our PCM has five checks to say: is this real, is it right, is it true? It goes beyond an LLM. An LLM is statistically predicting what the next word is, and it has context, and it’s enormously powerful. I’m not denigrating LLMs here, but they fundamentally are predicting the next word.

Which means that if I say "the dog ate the" and ask it to fill in the blank, probably 60% of the time it’ll say "dog food," but maybe 1% of the time it’ll say "the little boy," and it zooms off in a direction, and you could end up with all this stuff that’s totally wrong, because it has no cognitive understanding.

Whereas the PCM, on the basis of its architecture, just like our brains do, really understands what the dog ate, what the dog is likely to be eating, within reason. It’s not gonna say the dog ate the house or some completely wrong thing. It’s going to know that’s not right. And there are some other things about it that are absolutely critical to have in future AI.
Our AI doesn’t need a data center, even. It’s dramatically more efficient. And so I think what we’re gonna find is that the architecture is going to evolve. I mean, we have 207 patents on this architecture. Not all of ’em are issued, but we have a lot of patents, and we think, and so does DARPA, that our approach is the wave of the future.

So it’s going to change a lot of things, because right now the inefficiency of LLMs is just over the top. It’s crazy. We are running out of power, we’re running out of water. Make as many GPUs as you want, but if you can’t power ’em, that’s a problem.

The Chinese have all the power they need; the US does not. There are already some dark data centers that have never been turned on, and there’s gonna be more and more of that. So something needs to give, and we think that the PCM is what that thing is.
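The next-word gamble Charles describes, a distribution that is mostly right with a thin tail of wrong continuations, can be simulated with a toy sampler. The words and probabilities below are made up for illustration; a real LLM conditions on far more context and a vastly larger vocabulary.

```python
import random

# Toy next-word sampler for the prompt "the dog ate the ...": like an LLM's
# decoder, it draws the continuation from a probability distribution, so a
# rare nonsensical continuation still gets picked occasionally.
NEXT_WORD = {
    "dog food": 0.60,
    "steak": 0.39,
    "little boy": 0.01,   # the rare, wrong branch the speaker warns about
}

def sample_next(dist, rng):
    """Draw one word according to its probability."""
    r = rng.random()
    cumulative = 0.0
    for word, p in dist.items():
        cumulative += p
        if r < cumulative:
            return word
    return word  # guard against floating-point round-off

rng = random.Random(42)
draws = [sample_next(NEXT_WORD, rng) for _ in range(10_000)]
print(draws.count("little boy"), "of 10,000 completions veer off")  # roughly 100
```

Nothing in the sampler knows that "the little boy" is absurd; ruling such branches out takes a check outside the distribution itself, which is the role the five checks play in Charles’s description of the PCM.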
Brian Thomas: That’s awesome. Thank you. You brought up several examples, obviously the satellites and the LLMs. You really dove deep into that, and I appreciate you unpacking it for our audience. That software of yours obviously provides a lot of solutions, and then there’s your AI, and you’ve got some patents there.

Your PCM technology, unlike an LLM, continues to learn and becomes way more efficient, and obviously safer. We don’t like hallucinations; we talk about this a lot on the podcast. So again, I appreciate all the insights into what you’re doing and making a better, safer world with this new AI.
And Charles, the last question of the day, as we look to the future, how do you see the future of data transmission, AI and global connectivity evolving, and what role will companies like Atombeam play in shaping that future?
Charles Yeomans: Well, I think it’s pretty well understood that most of the best innovation comes out of little companies.

And there’s a reason for that: it happens in small organizations. I mean, DARPA invented the internet, right? So if you think about where Atombeam plays and where companies like Atombeam play, it’s really outside the four walls of big companies.
Big companies come up with great stuff, I’m not arguing that they don’t, and some of them are incredible. They have effectively infinite resources in many cases. But it isn’t just me thinking this; it’s true: much of the best stuff comes from the little companies, from entrepreneurs. Some guy has a great idea.

He sees an opportunity. He sees a chance to make a lot of money. He sees a chance to really change the world. And that is what we know we’re about. And you get things like our PCM. I mean, Google has, I think, 35,000 PhDs.
But we invented it. Yeah, we’ve got some PhDs too, but a lot fewer than 35,000. And at the same time, look at the need that’s coming. I mean, Anthropic’s uptime just went below 99%. And OpenAI killed their video product, Sora, because they needed more GPU time for coding tools.
And GPU rental prices have gone up almost 50% in the last couple of months. So there is an enormous issue going on here, a need for somebody to come up with something new. It’s unsustainable. And so the little companies like Atombeam are the ones, I think; not that the big companies aren’t coming up with stuff too, but some of the best stuff is coming from the little companies.
And for the little companies who are driving that innovation, yeah, it’s a tough environment, and these big companies are tough players. But we think that we have something, we’ve thought through it very carefully, and we’ve put in a lot of patents for it. We have over 500 patents total.

And all of that is massively important when it comes to where we’re going, how we as a small company can impact things, and how other little companies can impact things. You don’t go into this game without your eyes open. But you can go into it and be a real player, and that’s what we are.
Brian Thomas: That’s awesome. Thank you so much. I really appreciate that. And the big takeaway here, Charles, is, as you said, most of the best innovation comes out of small companies. Passionate entrepreneurs: they have small teams, they can pivot on a dime, they have a vision, they’re excited, and they’re enthusiastic.

And don’t get me wrong; big companies can still produce good products, right? But typically that’s because of their deep pockets and the large resources that they have at their fingertips. So, I really appreciate the insights today, and Charles, it was such a pleasure having you on, and I look forward to speaking with you real soon.
Charles Yeomans: Well, same here, Brian. I really appreciate you having me. Have a good week.
Brian Thomas: Thank you. Bye for now.
Listen to the audio on the guest’s Podcast Page.











