Say the words “synthetic media,” and most people will immediately think of deepfakes. With the rise in stories about deepfakes and the threats they pose to national and international security, it’s paramount to take extra precautions to keep products safe and secure. But what exactly is synthetic media?
Synthetic media, an umbrella term that describes “any media created or modified by algorithmic means, especially through the use of artificial intelligence algorithms,” includes artificially generated voice and video. For example, deepfake Tom Cruise has racked up millions of views on TikTok and opened the door to more mainstream deepfakes, including those for commercial use. In fact, companies including Amazon and Microsoft are currently using synthetic media to build their advertising campaigns, and many other industries are starting to incorporate it into their sales initiatives.
While synthetic media can be used for entertainment and commercial purposes in a multitude of ways, the greatest concern is its malicious use to spread disinformation and propaganda and to erode public trust in institutions. Just last month, several European mayors were duped into a video call with a deepfake posing as Kyiv mayor Vitali Klitschko.
The need for security and safeguards in synthetic media is apparent.
Synthetic Media Must Be Designed with Security in Mind
Creating security products in an open source environment helps to improve the safety of synthetic media products. Open source refers to programs whose source code is made available to users or other developers to modify as they see fit. This means that open-source software is decentralized, allowing for more collaboration, peer review and community input.
Deepfakes Can Be Created for Good
It’s important to note that synthetic media, as well as the use of deepfakes, can be used for good. For example, synthetic media can democratize content creation, making it easier and more cost-effective to produce audio, video or image content without the need for costly, physical processes.
It can also be used for educational purposes, providing historical context for students. Examples include holographic videos of Holocaust survivors sharing their experiences with younger generations, and voice AI that allowed for the posthumous delivery of President John F. Kennedy’s resolution speech to end the Cold War.
Synthetic media technology can provide a plethora of educational uses and entertainment for the masses; however, its biggest threat comes from bad actors, including online scammers, hackers and extremists. Bad actors can create videos of public figures without their consent, including adult videos that use someone’s “face” without permission.
In addition to spreading mis- and disinformation, bad actors risk undermining all the good that synthetic media provides. In other words, the bad actors ruin it for the good ones. That is why it is imperative to ensure security in synthetic media.
How Do We Secure Synthetic Media?
With the negative, malicious reputation deepfakes have, and their potential for abuse, it is paramount to take extraordinary steps to keep products safe and secure. When I co-founded Potion, it was intentionally designed to prevent abuse and to prevent deepfakes from being used outside their intended purpose.
Security in synthetic media should always be priority number one.
API access restricted to authorized users, identity verification, voice fingerprinting and transparent watermarking for identifying distributed content should be the baseline of synthetic media’s security features.
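To make one of these baseline features concrete, here is a minimal sketch of how a transparent watermark could tie distributed content back to the client account that generated it, using an HMAC over a server-side secret. All names here are hypothetical illustrations, not a description of any vendor’s actual implementation:

```python
import hmac
import hashlib

# Hypothetical server-side secret; in practice this would live in a
# secrets manager, never in source code.
SECRET_KEY = b"example-secret-key"

def make_watermark(client_id: str, content_id: str) -> str:
    """Derive a short, tamper-evident token binding content to a client."""
    message = f"{client_id}:{content_id}".encode()
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()[:16]

def verify_watermark(client_id: str, content_id: str, token: str) -> bool:
    """Check a token without leaking timing information."""
    expected = make_watermark(client_id, content_id)
    return hmac.compare_digest(expected, token)

# A token generated for one client will not verify for another,
# so leaked content can be traced to the originating account.
token = make_watermark("client-42", "video-001")
print(verify_watermark("client-42", "video-001", token))   # valid
print(verify_watermark("client-99", "video-001", token))   # forged
```

A production system would embed such a token imperceptibly in the audio or video itself rather than in metadata, but the principle, binding every output to a verified identity, is the same.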
The second most important priority? Buy-in and adherence from your clients.
Recently, my company decided to fire a client who shared their account credentials, allowing a colleague to use their account. This was a clear violation of our security policies. Many may think, “but he had his employer’s consent.” The truth is that these policies are in place to ensure that our products cannot be used to create malicious deepfakes, even if that means having to let go of a client who is not abiding by our policies.
The bottom line is that synthetic media is still an emerging technology, and deepfakes can and have created serious issues. That’s why it is up to those working in synthetic media to create better security systems and features to ensure that protections are in place to stop the negative, malicious use of deepfakes and to ensure public trust and confidence in our products.