The Erosion of Trust
Technology improves our lives but strips us of confidence that the information we share won't be abused, raising a fundamental question: Who can you trust?
Recently, during my exercise walk, I received a call from someone who claimed to be an employee of AT&T, the large telecommunications company that provides cellphone and internet services for my family and me.
I knew something was wrong when the caller said he had contacted me because AT&T wanted to help me save money. Judging from the phone bill the company routinely sends me, that's hard to believe!
He promised to significantly reduce my monthly AT&T charges if I sent him an inactive iPhone listed on my account. He sent me a UPS shipping label.
The phone call raised significant questions in my mind. How had this man gained access to my phone details stored in the records of a company that takes extraordinary measures to protect my privacy? More broadly, what does this minor incident suggest about an average person's ability to trust the institutions to which we willingly give access to our personal information?
The call I received was a scam, and I didn’t fall for it. Decades of experience as a journalist sharpened my skepticism, making me wary of the caller’s true intent. A bit of investigation revealed that the address on the shipping label didn’t belong to AT&T; it belonged to a gas station in Houston, Texas.
An AT&T employee named Ian, who refused to give me his last name, said the company uses data encryption that makes it impossible for anyone to penetrate a customer’s account unless the customer hands over the PIN that unlocks account details. I did not give the caller mine. AT&T did not respond to further questions I posed.
Nevertheless, the incident prompted me to reflect on how technology intensifies the erosion of trust, which is rapidly becoming a defining issue of our time. We see this erosion everywhere, from President Trump’s punishing tariffs on America’s long-time allies to his accusations that unfavorable news is “fake.” Then there is rogue world leaders’ disregard for borders recognized by international law. We will soon witness even more of this conduct as artificial intelligence grows increasingly sophisticated and widespread.
The broader implications of my modest experience surfaced in the extraordinary security lapse now in the news: high-ranking government officials discussed sensitive national security information on Signal, a commercial messaging application.
The app gained popularity in Washington after U.S. officials discovered that Chinese intelligence operatives had engaged in the same malevolent activity I had experienced, but on a much larger scale. Chinese operatives had secretly accessed American cellphone records, allowing them to eavesdrop on the conversations of key American political figures, such as President Donald Trump and Vice President JD Vance. Government officials flocked to Signal believing its encryption would make the app difficult, if not impossible, to penetrate. But AT&T has an encryption system, too.
Even though the Pentagon explicitly prohibits using Signal to transmit non-public national security information, Secretary of Defense Pete Hegseth, Vance, and other high-level officials, including the Director of National Intelligence and the director of the CIA, used the app in a group chat.
Incredibly, they discussed a pending military strike on Houthi rebels in Yemen who had targeted commercial and military ships in vital Middle East shipping lanes. Unfortunately, the group had mistakenly invited journalist Jeffrey Goldberg, the editor of The Atlantic, to participate in the chat, making him privy to a wide-ranging discussion of secret military operational details. He then wrote a story about the lapse, omitting any information he thought would endanger lives.
President Trump, Hegseth, and other government officials said no classified information was discussed in the chat and attacked The Atlantic’s piece. In response to the administration’s claims and the attacks on his reporting, Goldberg published a transcript of the chat. If you want to form your own opinion about the accuracy of those claims, here is Goldberg’s report.
The chat reveals that they discussed the tactics and timing of air strikes before planes took off from a U.S. aircraft carrier, suggesting that American pilots would have been vulnerable to enemy attacks had the app’s security been compromised. I once covered national security and the Pentagon for the Chicago Tribune, and if the information they discussed isn’t classified, I don’t know what is.
“I guess Signal is a few steps above leaving a copy of your war plan at the Chinese Embassy, but it’s far below the standards required for discussing any elements of a war plan,” Mark Montgomery, senior director of the Foundation for Defense of Democracies, told Politico.
What deserves more reporting, though, is what my experience and the blunders of the Hegseth group imply about Americans’ ability to trust the phone company, or the Pentagon, to protect them against scammers and spies.
This is more than a philosophical issue. Technology encroaches on our ability to keep personal information private every day. If you want to buy something online, the company selling the product will want your phone number, credit card number, or even your bank details. Every institution to which you provide such information vows that it will be secure and that no one can access it.
How can we know that this is true? For example, did the American officials in the chat about a U.S. attack on Yemen have an app from TikTok or Temu, both suspected of ties to Chinese espionage, on the same phone as their Signal app? And can a sophisticated hacker siphon information from one of those apps into another?
From my admittedly limited experience with a phone scammer, I lack confidence in such security guarantees. Mistakes can happen, and they will inevitably occur more frequently as artificial intelligence becomes more sophisticated and widespread. How, then, will the average citizen know whom to trust, especially as machines take over jobs once performed by humans?
Dr. Anastassia Lauterbach, an entrepreneur, professor, and businesswoman with extensive AI experience, told a panel organized by the Stimson Center, a non-profit, non-partisan think tank in Washington, that the concept of trust must evolve as AI forces more individuals to deal with machines or computers instead of live people.
“We need to establish a language to talk about AIs and machines. We are still using human terms to describe what we expect. In social science, for example, trust is described as the belief that another person will do what is expected,” she said during a Stimson-sponsored discussion about ethics and trustworthiness in an increasingly complex technological world. Now, she said, we are dealing not only with humans but also with machines, or with pieces of computer code oblivious to the idea of trust.
Dr. David Bray, who chairs the Center’s Accelerator program that explores innovative initiatives to address global challenges, said the focus on trust raises more profound questions.
“We’ve been wildly successful at rolling out technologies that will now convince you that the responses they’ve given, whether in text, audio, or visual form, appear human,” he said. But Bray said the same advances allow fraudsters and scammers to fool someone into thinking they are talking to a human rather than a machine. We must become more adept, he said, at disclosing whether someone is talking to a person, an organization, or a machine. And he offered a different measure for thinking about the trust issues raised by security breaches, small and large.
“A slightly different definition of trust, from the background of where I come from, is the willingness to be vulnerable to the actions of an actor you cannot directly control,” Bray said, adding that the actor in question could be an individual, an organization, a government, or a machine. “What has been shown,” he said, “is that we humans are willing to be vulnerable to actions we can’t control if we perceive benevolence, competence, and integrity” by the other party we are dealing with.
Bray noted that there was no way to assess the benevolence of a computer acting as though it were human. “How do people assess that?” he asked. “Given that we are all now connected digitally, how do we assess that a conversation is benevolent, competent, or has integrity? How do we do that for governments and for the world at large? Maybe this is the challenge that hits free societies in particular.” He said we lack the capacity to make these decisions at the required speed.
For now, remarks like Bray’s suggest that everyone should approach technology with a greater degree of skepticism until we develop better techniques for evaluating its benevolence, competence, and integrity. Most experts believe that profit-driven technology companies must keep humans involved in the design of artificial intelligence systems if those systems are to earn trust.
The questions raised by Bray and Lauterbach may seem aimed at an uncertain future, but individuals face them now, just as I did when trying to determine whether the phone company really was on the other end of the line.
James O’Shea is a longtime Chicago author and journalist who lives in North Carolina. He is the author of several books, the former editor of the Los Angeles Times, and a former managing editor of the Chicago Tribune. Follow Jim’s Substack, Five W’s + H, here.