Anatoly Kvitnitsky Podcast Transcript
Anatoly Kvitnitsky joins host Brian Thomas on The Digital Executive Podcast.
Brian Thomas: Welcome to Coruzant Technologies, home of the Digital Executive podcast.
Welcome to the Digital Executive. Today’s guest is Anatoly Kvitnitsky, founder and CEO of AI or Not, an AI detection startup focused on helping organizations safeguard against the growing risks associated with generative AI, including deepfakes, fraud, and misinformation. Under his leadership, the platform offers tools that enhance compliance and protect the integrity of digital ecosystems.
Prior to founding AI or Not, Anatoly was a principal at American Express Ventures, where he led strategic investments in emerging technologies across commerce, payments, fraud prevention, data analytics, and cybersecurity. He also served as vice president of growth at Trulioo, a global identity verification company, where he oversaw strategic sales, partnerships, R&D, M&A, and fundraising, helping scale its solutions to promote financial inclusion and best practices in data privacy.
Well, good afternoon, Toly. Welcome to the show.
Anatoly Kvitnitsky: Thank you for having me.
Brian Thomas: Absolutely, I appreciate you making the time. You’re calling out of the Bay Area today, up in Northern California, and I appreciate that. I’m in Kansas City, so our two-hour difference is not a big deal today. But Toly, I’m gonna jump right into your first question.
What inspired you to launch AI or Not, and how is your platform tackling the rising threat of generative AI misuse, like deepfakes and synthetic media?
Anatoly Kvitnitsky: Yeah, really great question. I spent the majority of my career tackling fraud and KYC problems, first at the largest credit bureau in the world, actually down in Orange County where you and I both resided at some point, Brian, and then at a startup that became a unicorn in the KYC space.
So in my career of a decade or so of fighting different versions of fraud and KYC, it was really a game of whack-a-mole against new technologies and what they brought to the table. Fast forward to the end of 2022, beginning of 2023: when I saw what was happening with generative AI, even though at that point, you know, fingers were wonky and the content wasn’t exactly photorealistic, I knew it was only a matter of time before it was going to be.
And that really inspired me to start AI or Not, because I knew a lot was being invested into generative AI, but I didn’t think enough was being invested into stopping the dark side of the technology. So that’s where I really started AI or Not, towards the latter half of 2023.
Brian Thomas: That’s awesome. And yes, we’ve seen a lot happen in just the last couple of years with the way the technology and AI is leapfrogging. But I love that you shared a passion of yours. You were obviously working in KYC fighting fraud and abuse, and seeing that the technology’s capabilities were only gonna get better, you jumped ahead and said, you know what? There’s a problem here, or a potential problem, and it’s gonna be huge. That inspired you to start your company, AI or Not, and I really love the story.
Toly, AI or Not helps organizations detect AI-generated content. What are the key signals or patterns that your system uses to identify fakes?
Anatoly Kvitnitsky: Yeah, it varies on a modality basis, and we cover all of them, so we do image, video, deepfakes, audio, and soon to be text.
I can go one by one, and I’ll keep it brief, and then Brian, you fire away with any follow-up questions. So on the image side, there are a few different foundational ways that models generate images, whether it’s Midjourney, now ChatGPT-4o, or Flux. They all have distinct patterns in each one of their images that our machines can read and identify as AI-generated. They have gotten so good that many times the naked eye can no longer tell. I look at this stuff all day, and I can no longer tell. Elon Musk, one of the creators of an image generator among many other things, said he can’t even tell what’s AI or not nowadays. And that’s an exact quote. It really boils down to the patterns that each one of these generators makes.
For audio, it’s the wavelengths that the generators make versus what a real drum sounds like, or a real voice, whether singing or speaking, sounds like. Though the pitch and the style might sound exactly the same to the human ear, the computer and the algorithms that we create pick up the distinct differences in the wavelengths that each of them generates.
For video, it’s kind of a combination of all of the above: you’re analyzing the frames, you’re analyzing the pixels within those frames of the video, and we also run through all the audio.
And then finally, text. There are definitely certain words, phrases, and combinations of words that generators use much more frequently than people do, and we pick up those signals as well. So the overarching theme is identifying the exact patterns and signals that AI foundational models generate versus those of human-generated content. It varies for each modality, and we work on all of them.
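To make the image side of this concrete: published research on GAN- and diffusion-generated images has found that the upsampling stages inside these generators leave telltale bumps and gaps in an image’s frequency spectrum, even when nothing looks off to the naked eye. The sketch below is only a toy illustration of that general idea, not AI or Not’s actual system; the function name, the number of bands, and the downstream classifier are all illustrative assumptions.

```python
import numpy as np

def frequency_fingerprint(image: np.ndarray, bands: int = 8) -> np.ndarray:
    """Summarize an image's energy across radial frequency bands.

    Toy illustration only: generator upsampling tends to leave
    characteristic spectral artifacts that a classifier can learn.
    """
    # Collapse color channels to grayscale; the artifacts survive either way.
    gray = image.mean(axis=2) if image.ndim == 3 else image

    # 2-D Fourier transform, shifted so low frequencies sit at the center.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray)))

    # Radial distance of every frequency bin from the spectrum's center.
    h, w = gray.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2)

    # Bucket bins into concentric bands (low to high frequency), then average.
    band_idx = np.minimum((radius / radius.max() * bands).astype(int), bands - 1)
    fingerprint = np.array([spectrum[band_idx == b].mean() for b in range(bands)])

    # Log-scale and normalize so fingerprints compare across image sizes.
    fingerprint = np.log1p(fingerprint)
    return fingerprint / fingerprint.sum()

# Fingerprints from labeled real and AI-generated images would then be fed
# to any off-the-shelf classifier (logistic regression, gradient boosting)
# to learn a generator-specific signature.
```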
Brian Thomas: That’s amazing, thank you for sharing that. You know, deepfakes, video, images, and, you mentioned, soon to be text, which I think is pretty cool. All the top platforms obviously have certain patterns in the way they create their generations, but your algorithms, analyzing these images, frames, and audio, take that to another level to really discern what’s fake and what’s not.
As you know, with humans it’s easy to understand: humans are so unique, but they also make a lot of, I guess, mistakes along the way, and that’s one of the things that helps us detect human activity. But I really appreciate you breaking that down for our audience today. Toly, the next question I have for you is: what industries do you see as most vulnerable to AI-driven threats right now, and how are they responding?
Anatoly Kvitnitsky: Yeah, I’m happy to, and I’ll cover this one from both a long-term and a short-term perspective. In the short term, I think it starts with news. A more recent example is the protests in Turkey, which were very real and very important to that country. What was happening was there were AI-generated images depicting scenes that actually never occurred.
So it actually took a little away from the message. But news outlets were reporting on those AI-generated images because they were much more, you know, social-media friendly. They were much more click-worthy, because they had Batman and Pikachu and Joker and all these amazing dressed-up characters at protests that actually never happened.
And I think for news and the information that we get, it’s really important to discern what’s real and what’s not, what’s misinformation and what’s actually true. This was a political protest, but when you think about elections and other ways that people gather information and make decisions, being able to tell what’s actually AI or not is quite important. And when you have news outlets reporting on AI-generated content as if it were real, it becomes quite difficult for people to do so.
I think after that it follows on the social media side: platforms being able to make that determination and ensure that the information being shared is actually real and not a deepfake or misinformation whatsoever. And there are actually countries like Spain and China that are putting in laws, going into effect this year, where you have to be able to determine whether something’s AI-generated or not.
And then finally, more in the shorter term, I think our financial services industry is very much under attack.
So whether it’s AI-generated scams or voice impersonations or KYC documents that are AI-generated, all of those things are happening now with this new technology, and a lot of times you need to fortify existing systems to be able to protect against it. And we’ve seen all sorts of crazy use cases too. We’ve even seen an insurance company that was receiving AI-generated dental X-rays from people trying to get insurance money out of the company. So the use cases vary, but those are really the short-term ones that keep coming up again and again and are happening today. Long term, I think the internet as we know it is very much in danger.
If you have an internet where the content being produced is AI-generated, and some reports project that 90% of all content will be AI-generated over time, and all the comments you receive are AI-generated too, what kind of environment is that, when it’s essentially just bots talking to bots? We are very much attuned to that and would like to play our part, which I think is a very important role, in trying to protect the information on the internet and guard against misinformation, as well as the overall world of what that looks like. ’Cause I don’t think any of us signed up to listen to podcasts of two AIs speaking, which has happened before, nor do we wanna browse an internet of all AI-generated content with AI-generated comments below it. So I think we’re gonna see a world where we might wonder: did we go too far with AI-generated content? And AI or Not, I think, plays a really important role in trying to protect against that.
Brian Thomas: Thank you, appreciate that. You covered quite a bit there. And yeah, I did see that about the protests in Turkey, where AI-generated images and frames depicted things that didn’t actually happen. You know, we saw this even with non-AI stuff, where news outlets would pick things up. I remember during COVID, outlets picking up stories that weren’t exactly correct, and obviously some people suffered the consequences of that reporting without at least verifying the information. We’re seeing some laws in China and Spain, as you mentioned, and I think that’s important.
But yeah, podcasts, gosh, if I could talk about that: Google’s NotebookLM is pretty cool. If you give it some prompts, it can actually do a podcast between two people, which is phenomenal. I think that’s good and bad at the same time. So thank you, Toly. The last question of the day I have for you: how do you view the future of digital identity in an age where AI can convincingly mimic real people’s voices and behaviors?
Anatoly Kvitnitsky: It’s a scary thought and one that’s near and dear to me, having spent time with credit bureaus and KYC companies verifying millions upon millions of individuals. I think a lot of the processes that exist are actually okay; they just need to be fortified against this new dynamic, whether it’s deepfakes or generative AI. I view this very much as the new synthetic identity.
Instead of, you know, spamming a credit bureau with data until a new identity is created, you’re essentially providing different pictures, or now videos, of a really realistic person, and you’re trying to convince someone on the other side of the transaction, whether it’s a platform or another individual on the line, that yes, this is indeed real.
And there are a lot of repercussions to that. Whether it’s money laundering, if a bank lets in a fake person who’s actually AI-generated and now the money’s being used for not-so-great things, or, say, a transaction on a marketplace, even like Facebook Marketplace, where the pictures on there are not real and the person behind them is not real.
There could be really, really negative consequences there. And then there’s even the case we had with a user who actually got defrauded of $150,000, where she was conned into buying fake art, because the artist created a bunch of AI-generated pieces, sold them as his own, and even had a whole art exhibit with these pieces. Without checking, she bought them; she later found out through the use of AI or Not that the art was actually AI-generated.
I think this all starts with identity. If you can’t find out who the person behind it is, whether it’s a financial transaction or a trust transaction like on the internet, I think it has really, really negative consequences, financially and for just trust on the internet.
And the example that you used, the two AIs speaking to each other, I think is a really, really cool example. But I also don’t think that’s what people wanna listen to on their drive home, just AI speaking to AI. And when they sign into social media, no one wants to see AI-generated content with AI-generated comments behind it. I don’t, for one. And I think there are a lot of negative repercussions if we allow the world to become, like, 90% AI-generated. So I think a lot of it, especially with transactions and conversations, starts with digital identity. And I like to think I have been playing a positive role in it, and I’ll continue to do so.
Brian Thomas: Thank you, I appreciate it. Digital identity is gonna be so important going forward. We’re just seeing this proliferation of generative AI across every industry and platform, and again, I know there are a lot of good intentions, but they say the road to hell is paved with good intentions, right?
I just really appreciate you highlighting the stuff that you’re doing today. I know we’ve got good processes, but we still need to continue to fortify them and keep up with the advancement of generative AI. What I’d like to say, Toly, is I really appreciate your time on the show today, and I can’t wait to talk to you again.
Anatoly Kvitnitsky: Brian, thank you so much for having me. It was a pleasure.
Brian Thomas: Bye for now.