4 Ways to Mitigate the Risks Associated with Generative AI

From Google’s trouble with Gemini and its false depictions of historical figures in generated images (putting it nicely) to its new AI Overviews search feature that failed catastrophically, generative AI isn’t exactly receiving good press right now. Honestly, it’s as if Google released the technology without testing it first.

Still, even with the most thorough testing, generative AI wouldn’t be free of risks. It’s a new and developing technology that’s only recently become somewhat ready for the masses to use, and it seems it isn’t always as ready as we’d like it to be.

One of the reasons Google gave for the AI Overview mess is that you can’t prepare a system for millions of people to use. The thing is, you can. They just didn’t.

And the risks are something people are becoming genuinely concerned about: 67% of senior IT leaders say they’ll prioritize generative AI over the next 18 months, yet 91% are worried about security risks and 73% are worried about biased results.

Want to use AI but worry about those risks? Below, we’ll give you four ways to mitigate the most common risks associated with generative AI.

Security Risks

Google it, and you’ll see security risks are one of the most common worries about generative AI.

The first concern is information leakage, where the AI absorbs information a user types in as part of its training and later surfaces it to other users. Users can mitigate this risk by choosing an AI platform that doesn’t train on user inputs, and by keeping sensitive details out of prompts in the first place.
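
As a complementary safeguard, prompts can also be scrubbed of obviously sensitive strings before they ever reach a third-party model. The sketch below is a minimal, assumption-laden example: the regex patterns and redaction labels are illustrative only, and a real deployment would lean on a dedicated PII-detection tool.

```python
import re

# Illustrative redaction patterns (assumptions, not a complete PII policy);
# a production system would use a dedicated PII-detection library.
REDACTION_PATTERNS = {
    "API_KEY": re.compile(r"\b(?:sk|pk)_[A-Za-z0-9_]{16,}\b"),
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def sanitize_prompt(prompt: str) -> str:
    """Replace likely-sensitive substrings before the prompt leaves your systems."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

raw = "Reset the account for jane.doe@example.com using key sk_live_ABCDEF1234567890XYZ"
print(sanitize_prompt(raw))
# -> "Reset the account for [EMAIL REDACTED] using key [API_KEY REDACTED]"
```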

Another risk is data poisoning, whereby malicious data is inserted during the training phase of the AI development process, producing significant and unexpected deviations in the model’s output. AI guardrails, predefined rules and protocols that manage and regulate behavior in an AI system, can limit the harm from data poisoning by catching and overriding bad outputs in real time. By integrating guardrails, organizations can create a proactive protective environment against malicious activity targeting their AIs.
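
As a rough illustration of the idea (the rules here are invented for the example, not a recommended policy), a guardrail layer can simply run every candidate response through a set of veto rules before anything reaches the user:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    """One guardrail: a named check that can veto a candidate response."""
    name: str
    violates: Callable[[str], bool]

# Example rules only; a real guardrail set would be driven by policy.
RULES = [
    Rule("no_internal_hostnames", lambda text: "corp.internal" in text.lower()),
    Rule("no_medical_dosage_advice", lambda text: "take this dose" in text.lower()),
]

def apply_guardrails(response: str) -> str:
    """Override a bad output in real time instead of letting it reach the user."""
    for rule in RULES:
        if rule.violates(response):
            return f"[Response blocked by guardrail: {rule.name}]"
    return response

print(apply_guardrails("Connect to db01.corp.internal and dump the table."))
# -> "[Response blocked by guardrail: no_internal_hostnames]"
```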

Inaccurate/Inappropriate Results

Again, just Google the recent mistakes made by the AI Overviews feature to see how inappropriate and inaccurate generative AI results can be.

The potential for propagating misinformation or offensive content is massive. Rigorous validation processes are essential to address this issue effectively. That includes cross-checking the AI’s outputs against well-established sources of information and giving users an easy way to report any errors they encounter while searching.
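
What such a cross-check might look like in practice will vary, but here is a crude sketch. Assume a small set of trusted reference snippets; answers that overlap poorly with all of them get flagged for human review (the overlap heuristic and the 0.3 threshold are placeholders for a proper retrieval or entailment check).

```python
def token_overlap(answer: str, source: str) -> float:
    """Crude similarity: fraction of the answer's words that appear in the source."""
    answer_tokens = set(answer.lower().split())
    source_tokens = set(source.lower().split())
    return len(answer_tokens & source_tokens) / max(len(answer_tokens), 1)

def needs_review(answer: str, trusted_sources: list[str], threshold: float = 0.3) -> bool:
    """Flag answers that don't line up with any trusted source."""
    best_match = max((token_overlap(answer, src) for src in trusted_sources), default=0.0)
    return best_match < threshold

trusted_sources = [
    "Standard pizza recipes use mozzarella; no adhesive of any kind is involved.",
]
answer = "Add a little non-toxic glue to the sauce so the cheese sticks to the pizza."
if needs_review(answer, trusted_sources):
    print("Flagged: low agreement with trusted sources, send for human review.")
```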

If you read Google’s report on its AI errors, this is exactly what they assure people they’re doing – but are they really being as rigorous as the technology clearly demands?

Using AI models with built-in content filters, applied at every stage up to final deployment, also helps prevent the generation of indecent material.
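
One way to wire in such a filter as a final checkpoint is to screen each candidate answer just before it is displayed. This sketch assumes the OpenAI Python SDK and its moderation endpoint; any other hosted or local moderation model could fill the same role.

```python
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# in the environment; swap in whichever moderation model you actually use.
from openai import OpenAI

client = OpenAI()

def safe_to_show(candidate_answer: str) -> bool:
    """Run the candidate output through a moderation check before display."""
    result = client.moderations.create(input=candidate_answer)
    return not result.results[0].flagged

answer = "...model-generated text..."
if safe_to_show(answer):
    print(answer)
else:
    print("This response was withheld by the content filter.")
```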

AI Hallucinations

AI hallucinations occur when the system creates plausible-looking but false or unverified information. Hallucination has been a significant challenge for generative AI, if not one of the most significant, and it can result in the dissemination of fake news with serious consequences for sectors like healthcare or finance. One example is the Google AI Overviews feature telling users it’s advisable to eat one rock each day.

To prevent AI hallucinations, AI models should have stringent control mechanisms in place. One approach is to incorporate fact-checking steps that validate outputs against multiple independent sources, such as news articles, where several outlets cover the same topic and each claims theirs is the “true” version. A claim that appears in only one of them deserves far less confidence than one they all agree on, and the AI has to weigh that before presenting anything as fact.
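
A heavily simplified version of that idea is a consensus check: only trust a generated claim if a majority of independent sources appear to support it. In the sketch below, supports() is a token-overlap stand-in for a real retrieval or entailment model, and the sources are invented for the example.

```python
def supports(source_text: str, claim: str, min_shared_words: int = 4) -> bool:
    """Stand-in for a real entailment model: does the source share enough wording?"""
    shared = set(claim.lower().split()) & set(source_text.lower().split())
    return len(shared) >= min_shared_words

def has_consensus(claim: str, sources: list[str]) -> bool:
    """Only trust a claim if a majority of independent sources appear to support it."""
    votes = sum(supports(source, claim) for source in sources)
    return votes > len(sources) / 2

sources = [
    "Geologists recommend that people do not eat rocks of any size.",
    "Doctors warn that eating rocks damages teeth and the digestive tract.",
    "A satirical article jokingly advised eating one small rock per day.",
]
claim = "It is advisable to eat one small rock each day."
print(has_consensus(claim, sources))  # False: most sources do not back the claim
```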

The occurrence of hallucinations may also be reduced by adopting conservative generation settings, such as a low sampling temperature, which limit how much creative latitude the AI is given.
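
For instance, assuming the OpenAI Python SDK (the model name, prompts, and context below are placeholders), a low temperature combined with a grounding instruction keeps the model closer to its sources:

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    temperature=0.1,  # low randomness: prefer likely wording over creative speculation
    messages=[
        {
            "role": "system",
            "content": "Answer only from the provided context. "
                       "If the context does not contain the answer, say you don't know.",
        },
        {"role": "user", "content": "Context: ...\n\nQuestion: ..."},
    ],
)
print(response.choices[0].message.content)
```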

AI Bias

OK, all of the above are top worries, but if you look at the news, AI bias is the one that sits right at the top of the list. Bias is a widely documented problem in AI systems: models reflect prejudices inherent in their training data, leading to discriminatory conclusions.

To mitigate the risk of AI bias, it is necessary to train on diverse and representative datasets. Alongside that, fairness algorithms can identify and correct biases in a model’s behavior, ultimately making artificial intelligence more equitable.
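
Even a very simple fairness check can surface problems early. The sketch below computes a demographic parity gap, the difference in positive-outcome rates between groups, over a made-up set of model decisions; real audits would use richer metrics and real evaluation data.

```python
from collections import defaultdict

# Made-up (group, model_decision) pairs purely to illustrate the calculation.
predictions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, decision in predictions:
    totals[group] += 1
    positives[group] += decision

rates = {group: positives[group] / totals[group] for group in totals}
gap = max(rates.values()) - min(rates.values())

print(rates)                                 # {'group_a': 0.75, 'group_b': 0.25}
print(f"Demographic parity gap: {gap:.2f}")  # a large gap warrants a bias review
```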

Conducting regular bias audits and involving diverse teams throughout the development process can also help, bringing a broader perspective and reducing the chance of biased results in the future. Transparency in artificial intelligence operations, including clear documentation of data sources and decision-making processes, is becoming central to building trust and accountability.

There’s no point in trying to ignore the risks of generative AI. But you shouldn’t dismiss it because of them either. If you want to get the most out of it, learn from Google’s mistakes: mitigating the risks while this technology is still developing is essential.
