Matthew Kael Swanson Podcast Transcript
Matthew Kael Swanson joins host Brian Thomas on The Digital Executive Podcast.
Brian Thomas: Welcome to Coruzant Technologies, Home of The Digital Executive Podcast.
Do you work in emerging tech? Are you working on something innovative, or maybe you're an entrepreneur? Apply to be a guest at www.coruzant.com/brand.
Welcome to The Digital Executive. Today’s guest is Matthew Kael Swanson. Matthew Kael Swanson has spent more than 15 years building and deploying artificial intelligence systems in real world commercial environments.
His work in AI began at Carnegie Mellon University’s Robotics Institute, where he helped develop self-driving technologies before moving into entrepreneurship. He went on to found SpeakerText, a Google Ventures backed AI company that pioneered crowdsourced approaches to scaling enterprise workforces.
SpeakerText was acquired by CloudFactory in 2012 and in 2016 he founded Augment CXM, which developed some of the earliest large language model based tools designed specifically for enterprise contact centers. The company was acquired by Sutherland Global in 2022. Well, good afternoon, Matthew. Welcome to the show.
Matthew Kael Swanson: A pleasure to be here.
Brian Thomas: Awesome. Thank you, Matthew. I really appreciate it. And I know you’re hailing outta the Bay Area there in San Francisco. I’m in Kansas City, so I appreciate you making the time. Just a two hour time difference today, but still appreciate that. And Matthew, if you don’t mind, I’m gonna jump into your first question.
Most software vendors charge a flat subscription, whether the tool works or not. How do you see this changing with AI?
Matthew Kael Swanson: Well, there's a bigger change happening right now in business models, and this happens every so often. Back in the day, we saw software vendors charge license fees, and then Salesforce showed us a better way with seat-based pricing.
Not too long after that, Amazon showed us usage-based pricing. Here we are yet again with another shift. What Silicon Valley circles are talking about now is outcome-based pricing, and really what that's all about is trying to align on value. At the end of the day, how can a vendor work with a customer and provide ROI? That's why this shift is happening.
So, getting back to your question. What we've seen happen is that because AI can now do labor, and not just be a tool for employees, it can start to take on higher-value tasks and share in the risk and upside. So what does that translate to? For us, outcome-based pricing is commissions in a sales environment. That's what we do at StaffAI. We provide an inside sales rep, and that's how our AI charges. It's not about how much the AI works, it's about how much revenue gets produced, which we then share in through commissions.
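To make the commission model concrete, here is a minimal sketch. The 10% rate and the deal values are invented for illustration, not StaffAI's actual terms:

```python
# Sketch of outcome-based pricing as a sales commission: the vendor
# earns nothing per seat or per API call, only a share of revenue the
# AI rep actually closes. All numbers here are hypothetical.

COMMISSION_RATE = 0.10  # assumed 10% revenue share

def vendor_fee(closed_deals):
    """Fee owed to the AI vendor: a cut of closed revenue only."""
    return sum(closed_deals) * COMMISSION_RATE

# A month with three closed deals (unclosed leads cost nothing):
deals = [12_000, 8_000, 5_000]
print(round(vendor_fee(deals), 2))  # prints 2500.0
print(vendor_fee([]))               # prints 0 -- no outcome, no fee
```

The point of the model is visible in the last line: if the AI produces nothing, the customer owes nothing, which is how the vendor shares in the risk as well as the upside.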
Brian Thomas: That's pretty cool. I saw that initially when we reached out to your team to get you on the podcast, and I thought that was interesting.
I'm glad you're breaking that down for our audience. But you're right, licensing has been all over the board. I don't know who does it worse, Microsoft or others, but they always change the game on us, right? From seats to subscription-based to you name it. But I love the outcome-based pricing, and I'm glad you broke that apart for us today.
Really, at the end of the day, you're trying to align value with that type of pricing. So, thank you. And Matthew, a major workforce concern involves AI taking jobs. What are you seeing happening at companies hiring your AI employees?
Matthew Kael Swanson: Yeah, I mean, this is absolutely the conversation of the workforce right now. AI is obvious at this point. You know it's going to keep improving and accelerating how much of a task it can do. We see this a lot, it's in our name, StaffAI, right? We're definitely seeing that trend of AI being able to do end-to-end tasks that people normally would have done.
What we're seeing inside the workforce is that AI is now able to do desktop work, not just as well as but better than people can: clicking around screens, updating CRM records, going out and gathering leads. That's all AI territory.
What we're seeing is, we recently deployed into a 12-person department at a company, and yeah, there was some displacement. The AI came in and did all of the back-office type of clicking around. But more interestingly, of the team of 12, eight people are left, and what the company did is repurpose those people to start having more high-value interactions with their customers.
To make this a little more tangible, this is a property broker. Some of those people are now going on site to take better pictures of the properties than the ones the customer was providing, as one example. There's no way the AI could ever do that, right? It can't move around.
Robotics is a long way off. So, basically, going to the customer, getting a closer read on what's really going on in their world, and translating that back into the AI, those are emerging jobs that companies didn't really have the business case for. Well, now that you can automate the back-office work, they do. And that's what we're seeing.
Brian Thomas: That's awesome. I really love that. You're keeping the machine and the human together cohesively, not necessarily cutting jobs like we're seeing lately in the news. And again, I don't know if some of that's clickbait, or some of it's an excuse for companies to downsize.
We all know it's coming, but I liked your example there. We know AI is coming in, continually improving and doing desktop work, but in your example of that property broker company, with your technology they were able to keep and repurpose those eight out of 12 people so they could do higher-level, more critical tasks that AI couldn't do.
So, again, I appreciate that. And Matthew, every CXO is under pressure to deploy AI fast, but they're also worried that their proprietary data will end up in someone else's training run. How should mid-market companies be thinking about that trade-off?
Matthew Kael Swanson: I was just saying that AI taking my job is the conversation of the workforce.
Well, AI taking my data is the conversation of management. That's what we're seeing right now in the marketplace. And so, yeah, it's a huge question. It's an age-old question that's just become a lot more pronounced, because AI can now do so much more of the business process.
So what we're seeing, and what we're leaning into at StaffAI to address this, is the idea that if you really think about AI as labor, you can start to contract with it differently too, right? And this gets into the data protection issue.
If you look at the data terms in a legal contract, in a software services agreement, there are so many opaque terms, right? Like, what the heck is metadata? It's intentionally ambiguous for the vendor, because they can then do a lot more with it.
Well, if you compare that to an independent contractor agreement, how you work with a person, it's a lot clearer. It's a lot easier to understand who owns what. That's what we're seeing, and what we're actually doing at StaffAI is our customer signs an independent contractor agreement with their AI employee.
They don't sign a software services agreement, so they have data protection terms covered from that front in a relatable way. But also, from that point onward, everything is the same as working with a remote contractor. When you bring a person onto your company, you give them a machine, you give them a desktop.
You give 'em software licenses, and that's your protection, right? If you offboard that employee, you retain the machine. And that's what can now be done with AI employees. You can give them a dedicated desktop that they use. In StaffAI's case, our AI actually just clicks around the screen with computer vision.
There's no API access going into the database. It's using the same access control layer for operating and accessing data, just like a human would. So that's one way to address the data protection issue: treat the AI like an employee, and it just simplifies the process.
Brian Thomas: That's very cool. I never even thought of it that way, but it makes sense, and I'm sure it works here in your case with data protection. Treating an AI employee like a human employee, setting them up with your own device so that you have control over it.
But the independent contractor agreement, I thought that was very, very interesting, for sure. So, thank you for sharing that. I know the big talk today is about employees losing their jobs to AI, but the data protection piece is what the executive team is always looking at, and I really like your insights there.
So, thank you. And Matthew, the last question I have for you. In your work, you've expressed an interest in the concept of flow and how it is missing in so many work settings. What is flow, and how would AI staffing help people experience it more often?
Matthew Kael Swanson: Yeah, I mean, this is definitely something personal, and a journey I went through as a roboticist creating different technology companies.
I hit this challenge over and over again: the limits of conditional logic, right? There's only so much you can program about a situation compared to how a human would naturally operate in that same environment. When you break it down into if-then-else, you miss a lot of the natural way you might go about handling a task.
There are so many nuances that you can't cover in if-then-else statements. That was a realization that came at some point in my career. And what got me excited about technology again is what we're seeing now with gen AI, with these large language models, and more particularly with reasoning models.
They don't do if-then-else. They take prompts, these very abstract sentences. We're all used to this now, but it's pretty novel that a model can take something general like that, something a person might say, and then reason through it and handle all those different ambiguous situations correctly.
What that allows people to do is stop having to fit everything into a box. If you can have your AI handle things like you would, it frees you up to stop thinking about all those different situations in these really confined ways, and it also frees you up to go out into the real world.
And that's really where human beings thrive. Sitting at a desk behind a computer clicking on boxes was not how our bodies were built. So I'm excited about a world in which people get liberated from behind the machine and get to go out into the field, out in the real world, interacting with other people. I think there's just so much demand for it, and I think people will be a lot more satisfied with their lives when they get to do it.
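The contrast Matthew draws between conditional logic and prompt-driven reasoning can be sketched in a few lines. The rule-based version has to enumerate every case in advance; the prompt-based version hands one abstract, natural-language instruction to a model. The ticket-routing task, the team names, and the `call_llm` client are all hypothetical, since no particular API came up in the conversation:

```python
# Rule-based routing: every situation must be anticipated with if/then/else.
def route_ticket_rules(text):
    text = text.lower()
    if "refund" in text:
        return "billing"
    elif "password" in text:
        return "account"
    else:
        return "unknown"  # any nuance not enumerated falls through

# Prompt-based routing: one abstract instruction; the model reasons it out.
# `call_llm` is a placeholder for whatever model client you actually use.
def route_ticket_llm(text, call_llm):
    prompt = (
        "Route this support ticket to the right team "
        "(billing, account, shipping, other): " + text
    )
    return call_llm(prompt)

print(route_ticket_rules("I was double charged, please refund me"))  # billing
print(route_ticket_rules("My order arrived damaged"))  # unknown
```

The second print is the point: a damaged shipment is an ordinary situation a person would handle without thinking, but the rule-based function has no branch for it, while the prompt-based version can reason through cases nobody wrote down.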
Brian Thomas: That's awesome, and you're absolutely right. I don't care who you are, and some people do like their office job, but it seems like we're in a box. We've got a square computer, a square device, a cube for an office, a square for a desk.
It's kind of cliche, but I liked how you talked about that flow. In your journey as a roboticist, you've learned a lot along the way. Reasoning is where AI is headed, and if we can get folks out of these boxes, so to speak, and moving, doing more fun things, interacting with other humans, and taking on higher, more critical tasks, that's where we need to be.
We just gotta make sure we keep the guardrails on this stuff. I've talked in many episodes this past year about the guardrails and where we're headed with the acceleration of AI. But thank you so much. I appreciate your insights, I really do.
Matthew Kael Swanson: Well, I'll add onto that: let's liberate people from boxes, but keep our AI in them, right? We just talked about how you protect your data by putting the AI onto a machine, putting it into the box. There is this concern about giving all of your data, all of your interactions, to some cloud AI model, right? That's kind of what most companies are doing right now.
That's super scary. That's super risky. There is a pathway here where people can have agency over AI, by following this model of putting the AI onto a machine, and the machine is localized. And the flip side of that is, if the AI is operating inside the box, the people can operate outside the box. That's a positive direction we could all look forward to.
Brian Thomas: Absolutely. And I do appreciate you highlighting that again. Let's give that mundane, boring work to AI, and let us go do the fun stuff and make the world a better place. So thank you, I really appreciate that. And Matthew, it was such a pleasure having you on today, and I look forward to speaking with you real soon.
Matthew Kael Swanson: Alright, thank you for having me.
Brian Thomas: Bye for now.
Listen to the audio on the guest's Podcast Page.