Ed Watal Podcast Transcript
Ed Watal joins host Brian Thomas on The Digital Executive Podcast.
Brian Thomas: Welcome to Coruzant Technologies, home of the Digital Executive podcast.
Welcome to the Digital Executive. Today’s guest is Ed Watal. Ed Watal is the founder and principal of Intellibus, an Inc. 5000 Top 100 software firm headquartered in Reston, Virginia. He serves as a trusted board advisor to some of the world’s largest financial institutions, where C-level executives rely on his expertise in IT strategy, enterprise architecture, and digital transformation.
One of his flagship initiatives is Big Parser, an ethical AI platform and global data commons dedicated to transparency and responsible AI development. A seasoned entrepreneur, Ed has successfully built and sold multiple tech and AI startups before founding Intellibus. He has held leadership roles at major global financial institutions, including RBS, Deutsche Bank, and Citigroup.
Well, good afternoon, Ed. Welcome to the show!
Ed Watal: Good afternoon, Brian. It’s great to be here.
Brian Thomas: Absolutely, my friend. I appreciate you making the time, and I understand you’re in Jamaica right now, which is awesome. I know that’s not normally where you’re at, but I’m so jealous right now. I’m in Kansas City, sweltering in this humidity.
So thank you again. And Ed, if I could, I’m gonna jump right into your first question. You’ve advised C-level executives at some of the world’s largest financial institutions. What are the top digital transformation priorities you’re seeing in finance today?
Ed Watal: Within the finance industry, fraud has been one of the biggest challenges, and AI has of course created the possibility of enormous fraud because you can create deepfakes.
Recently we had Sam Altman talking about how scary it is that you can call a bank and ask for a significant-sized wire transfer, and all they ask is for you to speak a code on the phone, which could be easily deepfaked. So the entire financial industry is grappling with this risk, and that is definitely one of the biggest challenges today.
Brian Thomas: Thank you. Absolutely. You know, for almost the last two years on the podcast, no matter who I had on, we talked about AI, deepfakes, fraud, et cetera, but you are certainly up for the challenge working in finance with AI. You know, it levels the playing field for the good guys when it comes to creating amazing solutions for the world.
But on the flip side, when you’ve got bad actors using deepfakes to steal money, that’s a whole other level of challenge that we need to address. So I appreciate your insights, Ed. Big Parser is described as an ethical AI platform and global data commons. What inspired you to create it, and how does it differ from traditional AI platforms?
Ed Watal: Interestingly enough, almost 20 years ago I had this epiphany, or for lack of a better term, a dream, that there had to be a better way for humans to contribute data in an ethical, responsible manner to create something like ChatGPT. Obviously, this was 20 years before ChatGPT existed, and my hypothesis was that eventually someone would end up creating something like it.
I was a big fan of the movie Iron Man, which I think a lot of Marvel fans would appreciate. As I watched something like Jarvis come to life in the movie, I would always imagine what it would take for society to create something like that. ChatGPT is not Jarvis, but it comes pretty close to a lot of the things you would expect a Jarvis-like AI to do.
My hypothesis was that for something like that to happen, all the data on the internet would essentially have to be fed into an AI engine. What was blocking me from doing that was, obviously, an ethical concern. A lot of people would argue that OpenAI, when they put all of the internet’s data into ChatGPT, sort of crossed that ethical boundary. Big Parser was an alternative approach to solving the same ChatGPT problem, which was largely to say we would follow the Wikipedia approach: we would collect and organize all human data on the internet, much like Wikipedia has done, except we’d store it in a data store, like a database, and then feed that database of good, clean information, what we’d call the data commons, into an AI engine, much like a transformer model or an LLM. Back then there were other models, not transformers, but that was the original idea.
Again, the hard part was collecting data. So we really focused on building a community of individuals, which included everyone from high school kids to the head of AI for the Pentagon. People would come in and sit in a workshop for several hours. We had several US Marines come and do that, and they would sit there and feed data into grids, much like people feed data into Wikipedia. Except it took us almost a decade to gather what I would say was a fairly insignificant amount of data compared to what exists on the internet.
And at some point, the ship had already sailed. Someone had taken data from the internet, and ChatGPT existed. So we paused the idea of collecting that data, but we came up with an alternative approach to solving the problem, which I think is still solvable. We don’t have to take it as a foregone conclusion that human data has to just be taken off the internet without permission.
Brian Thomas: Thank you. I appreciate the backstory on that. I think we’re all kids at heart with big imaginations, and you mentioned Iron Man, which is obviously one of my favorite movies as well, with storylines going back to the comic book days. You had that dream to create something that was really powerful, yet ethical.
And right now, in my opinion, ethics and guardrails are not keeping up with the acceleration of AI. As you know, all these companies are leapfrogging each other, and, as they say, we’re right at the cusp of having artificial general intelligence, which is amazing. But at the same time, I’m a little bit scared about what could happen if we don’t have these guardrails in place.
But I love the fact that you’re on the right track with your ethical AI and your global data commons idea, and you’re working to bring that into play as AI continues to evolve. So I appreciate that. Having founded and exited multiple tech and AI startups, what do you think are the critical ingredients for building scalable, responsible, and ethical AI companies?
Ed Watal: One of the foundational guardrails in building an ethical AI company is to think about where and how you’re sourcing the data that you’re feeding into an AI engine. We can really put the companies of the world into two big buckets: the companies that are creating models, and the companies that are using models.
For the companies that are creating models to be ethical and responsible, they have to think about where they’re sourcing the data from. If they’re sourcing their data from an open, ethical source like a data commons, which is human-curated data, for example Wikipedia or Big Parser, then it is definitely ethical and responsible. Versus if you’re taking data from any other website. I don’t want to name organizations and make them feel they’re doing something unethical, but there are several organizations that have these stores where you can go get AI training data. How do you know where that data came from? What was the source? How did it originate? Where did it come from? Those are questions you must ask, because that’s foundational.
Then there’s the other side of the coin: the model has been created, with ethically or unethically sourced data, and now you’re using it. What are you using the model for? If you’re using the model for summarizing information that you’ve created, it’s probably an ethical use. If you’re using the model for making the world better in some sense, it’s ethical. The moment you start using the model to do things like creating deepfakes or trying to game or manipulate society, that’s when it becomes challenging.
And there are several companies trying to do that. Now, there’s a very important ethical question there, which is around jobs. Of course, some people are afraid AGI will come and robots will take over the world, but that’s a doomsday scenario, and I’m not a big proponent of that line of thinking.
The line of thinking that I do care about deeply is how AI will be used in the context of jobs. Jobs are what keep the economy running, and with AI there’s a significant fear that a lot of jobs will be lost, lost lock, stock, and barrel. For example, a call center in Manila laid off 800 workers. Not that it’s a significant layoff; there are layoffs happening in the US at times with tens of thousands of people. But that 800-person layoff happened purely because someone replaced the work they were doing with an AI agent, and those things will happen more and more in society.
So you could make a lot of money as an AI company by trying to get rid of jobs, because that’s obviously possible. But you could also make a lot of money by investing in creating jobs, and there are companies investing in things like that. For example, can we take AI models and make anybody a software engineer, even if they don’t know how to write code, empowering and democratizing technology? That’s the other side of the fence, where people are using AI to create jobs.
So I think if you’re on the using-the-model side, you want to consider that: are you taking jobs away, or are you creating jobs?
Brian Thomas: Thank you. I really appreciate that; I have some good takeaways there. Ethics in AI is really near and dear to me in general. You broke it down early in the conversation: where and how you’re sourcing your AI data.
You’ve got the companies that are creating the models and the companies that are using them, and at the end of the day the foundational question is what you’re using them for. I do want to switch to the economy a little bit. An MIT professor, in a course I took, said that yes, there are going to be a lot of jobs eliminated, but so many new jobs that have never existed before will be created in this AI era we’re in.
So it’s certainly an interesting topic and we could go for hours on it. Ed, the last question of the day. Looking ahead, how do you envision the role of ethical AI in digital governance evolving in the face of rapid advancements in generative AI and autonomous systems?
Ed Watal: That’s actually a very interesting question.
Digital governance is a topic that’s very dear to my heart. I’ve invested a lot of time thinking about it, building solutions in that space, and investing in efforts around it. One of those efforts that you might be familiar with is the World Digital Governance effort that I’m pretty closely involved with; that’s wdg.org.
As part of that effort, the key questions being laid out, or rather proposed, are: how do we accelerate AI, and what are the guardrails that we need? Because if we think of digital governance as hurdles, as a means to slow AI down, then it is not net productive for society. There are so many cures for diseases you could find, and find quickly, so many vaccines you could create, so many good things you could do with AI. Therefore there is a need to accelerate AI, but acceleration of AI without guardrails could be complete chaos and mayhem.
So what are those guardrails? That’s the key question. And those guardrails are based on some foundational principles, so what are those principles is another key question. Often governance is about policy or regulation, but we are asking people to peel that onion back and say: whatever your regulation or policy is, what guardrails is it really enforcing? And what principles are those guardrails based on? Those are some of the key questions we are asking people to contemplate.
Brian Thomas: Thank you. I think that’s very profound. It’s really that question you asked at the start: how do we accelerate AI, and what guardrails do we need? I totally agree. I’m looking for the positive in AI. It has so much potential to do good, to do so many things, to cure diseases, but as you said, acceleration without guardrails will be chaos.
So we need to step into this logically and think it through. I think we can really have the best of both worlds here, with the ethics that keep AI from getting out of control. As you know, it’s just a matter of time before superintelligence is here, and that will be a game changer.
And hopefully we all have our ethics in place. Ed, thank you so much. It was such a pleasure today, and I look forward to speaking with you real soon.
Ed Watal: Likewise, Brian. Thank you for having me.
Brian Thomas: Bye for now.