Public accountability is today a pivotal issue that the high-tech sector discounts at its own peril. We see this play out in the media seemingly every day as certain high-profile tech companies are castigated for the undesirable societal effects of their products and services. Most vividly, we have witnessed how social media platforms can spread dangerous falsehoods about COVID. For some providers, these impacts are unintentional. For others, they result from prioritizing growth goals over everything else.
Prioritizing foundational and societal pillars
Regardless, the potentially profound effects of emerging technologies – machine-learning algorithms, complex AI models, and biased data – must always command our attention.
As people become more cognizant of tech’s negative impacts, trust in it is eroding. After earning high marks for decades, tech has seen its trust ratings fall over the last ten years, with a precipitous drop during the last year.
A 2012 “Edelman Trust Barometer” reported a 30-point gap between the public’s trust in technology compared to business in general (77 percent versus 47 percent). Since then, trust in business has grown while trust in tech has dropped. The gap is now less than 10 points (68 percent for tech, 59 percent for business in general). Further, after a pandemic-driven surge in the first half of 2020, tech experienced double-digit declines in trust in the United States (-13 percent) and the UK (-12 percent). Worldwide, trust in tech has dropped six points, leaving it still in the top position but just edging out manufacturing and healthcare.
Perhaps the most conspicuous, and societally problematic, decline in trust in tech has been that toward medical technology – specifically toward vaccines. Even before COVID, for example, a 2019 Gallup survey reported that 84% believe it is “extremely important” that parents get their children vaccinated, down from 94% who said the same in 2001.
Certainly, vaccines can be, inherently, advanced technology: The mRNA COVID vaccines are referred to as “software for the cell.”
Trust in time
To understand the decline of trust in tech, we should look at how this trust has evolved over time. For much of human history, trust was based on interactions between people. Starting in the industrial era, trust shifted to interactions between people and institutions. Today, trust often resides in direct interactions between people and technology. Instead of having one person or one institution to blame if something goes wrong, responsibility is now spread across a wide range of sources that are difficult to identify, let alone hold accountable.
Put another way, within personalized and institutionalized models of trust, individual people and institutions were the guardians of trust. But in today’s distributed trust model, the trust guardians are social media platforms, algorithms, and frameworks powered by such emerging technologies as AI, machine learning, blockchain, augmented and virtual reality, and the Internet of Things.
These technologies are powerful and have great potential for good. But they are not well understood by the average person. For example, by analyzing large amounts of data and experience over long periods of time, AI systems can learn to draw conclusions and make decisions by recognizing patterns that would be too complicated or nuanced for a human mind. But this complexity often means that AI systems can reach conclusions that are a mystery to users and sometimes even to those who created the systems.
This is a critical point because users of tech-based systems find it hard to trust a technology they do not understand. This is especially true in fields – such as healthcare, finance, food safety, and law enforcement – where the consequences of a flawed system, or a biased system that reflects historical or social inequities, are more serious than getting a bad movie recommendation from Netflix.
Moreover, people generally enjoy our trust until, and unless, we give reason to withdraw it; technology enjoys no such presumption. Especially in an era when technologies are the new guardians of trust, the companies that make tech must now earn our trust every day.
How can tech regain its lost trust? First and foremost, it must safeguard users’ interests. Easier said than done. But just as tech can create trust problems, it can also be harnessed to solve them. Indeed, by wholeheartedly adopting emerging technologies to remedy trust deficits, tech providers can rebuild and strengthen trust.
Emerging technologies can also continuously challenge companies to reimagine their business models and the way they produce products and services. To safeguard users’ interests, chief technology officers must reimagine how information and data are utilized, ensuring that data and strategic insights are generated only through decision-making models that prioritize trust. The long-term benefits to society for such action may be great; the consequences for inaction may be no less so.