Is AI Mania Over?
The prophets of Artificial Intelligence are meeting their own worst enemy: Themselves.
From the manipulation of posts on Elon Musk’s careless X platform to the dishonest bots exposed by NewsGuard on its Reality Check service, misuse of AI technology pollutes the promise touted by its champions.
Gary Marcus, a prominent AI skeptic, now believes a story he wrote for WIRED magazine predicting that the euphoria over AI would collapse in 2025 was too cautious. “Clearly,” he wrote on his Marcus on AI Substack blog, “I got the year wrong. It’s going to be days or weeks from now, not months.”
One sign all’s not well in the AI world: the recent stock market rout in which traders bailed from high-flying tech stocks like Nvidia, fearful that AI mania had obscenely inflated the true value of the maker of chips needed for AI. There are other signs of trouble, too.
NewsGuard’s Reality Check, a truly valuable service for anyone interested in media credibility, recently exposed a fake news site in a Substack post titled “USNewsper.com is not American or News; It’s Lithuanian AI!”
“Despite advertising itself as a trusted eye on all things happening in America,” Reality Check reporters discovered, “US Newspaper is a foreign-backed website using AI and fake followers on Musk’s X to spread misinformation targeting Democrats after President Joe Biden’s withdrawal from the 2024 presidential race. The site published content to multiple social media platforms, including X, where its articles are promoted by multiple fake accounts, apparently created to amplify its reach” and inflate its perceived popularity.
One example the NewsGuard reporters highlighted was a US Newspaper story published on July 25, four days after Biden announced he was dropping out of the race and endorsing Vice President Kamala Harris.
“The site published an article claiming that Biden still intended to run for reelection and that Harris has consistently denied any interest in challenging Biden for the nomination,” the NewsGuard journalists wrote. In the truth-stranger-than-fiction department, former President Trump almost simultaneously posted an eerily similar story on his Truth Social website.
“Using screen names on X such as ‘Eloise Ellis’ and ‘Tamara Brinkley’ along with profile pictures and posts that show signs of being AI-generated, these users exclusively share and engage with US Newspaper articles, creating the impression of reader engagement that lures ad dollars supplied by computerized ad programs.”
Actually, NewsGuard reported, the US Newspaper site is run by someone named Tomas Dabasinskas, a senior executive at Squalio, a Lithuanian internet and data service group, according to his LinkedIn profile. Mr. Dabasinskas didn’t respond to calls placed by Reality Check reporters.
X and Musk, who openly backs the presidential candidacy of Donald Trump, are not the only ones promoting biased personal or political agendas on social media platforms. The Wall Street Journal on Friday published a story saying that intelligence officials warn the 2024 presidential election could face an “unprecedented flood of fake news fueled by AI from foreign actors.” AI-generated misinformation already infects political campaigns.
The newspaper investigated TikTok, the app that many young people use for news, and found thousands of videos with political lies and hyperbole. The newspaper’s inquiry traced many of the videos to a complex web involving China, Nigeria, Iran, and Vietnam.
Fake news stories in the media are nothing new. Stories with doctored photos and text have been around since the days of yellow journalism popularized by icons like Joseph Pulitzer and William Randolph Hearst. The Journal’s reporters found that AI takes fake news to a new level, making it “trivially easy to splice together clips and write and voice scripts at little cost.”
“Anyone with five dollars and a credit card can do this,” Jack Stubbs, chief intelligence officer of the research firm Graphika, told the Journal’s reporters.
Musk and those who defend him say X has a wide range of posts, both credible and questionable, that represent freedom of speech in action. That’s true. What’s playing out on sites such as X goes beyond free speech, though. Reports like those from NewsGuard and the Journal expose a lack of standards, solid editing, and journalistic principles. Creating fake accounts to give a false impression of audience engagement is not free speech; it’s dishonest manipulation designed to mislead the public.
The practices undermine the integrity of all news organizations and capitalize on the mechanics of computer-generated ad models to divert revenue from financially strapped publishers into the pockets of journalistic charlatans.
It is easy to blame all of this on Artificial Intelligence. AI technology, after all, amplifies the problems and makes it easier to flood the media with misinformation. But AI itself is just part of the trouble and not the important part. The real problem lies with people such as Musk, who touts X as a beacon of free speech but treats honesty and public trust like second-string values. Musk has laid off or fired many of the X employees, from the days when the platform was known as Twitter, who had been hired to spot questionable practices such as those employed by US Newspaper.
AI experts such as Richard Boyd, a founder and CEO of Ultisim, a North Carolina high technology company that uses AI, warn the industry against removing human beings from the loop when implementing AI, a technology whose pros far outweigh the cons if properly deployed. Boyd says AI is used successfully and beneficially in health, education, and many other fields.
The fake stories are not limited to politics and can literally put words in someone’s mouth. In the days leading up to the Olympic opening ceremony, NewsGuard’s Reality Check reporters uncovered a questionable music video purportedly by Little Big, a popular Russian rave band that left Russia for California because of the band members’ opposition to Vladimir Putin’s attack on Ukraine.
“The video ridiculed the Games, which had barred Russian participation because of its invasion of Ukraine,” the NewsGuard report said, “by showing athletes hurdling over bags of trash, Paris Mayor Anne Hidalgo shooting immigrants and French President Emmanuel Macron doping Western athletes.”
“Paris 2024, did you find a Paris whore? Paris, Paris one two three, go to Seine and make a pee,” Little Big’s lead singer, Ilya Prusikin, appeared to sing to the camera.
A spokesperson from Little Big told NewsGuard that the video was not a creation of the band and that it was not involved in the production or distribution of the video. A Russian news site named Agents.media told NewsGuard reporters that AI was likely used to depict some of the people in the video and that sham accounts on X were used to amplify the video, which received more than 7.1 million views and 2.4 million retweets.
Marcus says AI might not be the problem that everyone initially feared when OpenAI, the company that generated widespread optimism about the technology, introduced its flagship model, ChatGPT, in November 2022.
Virtually every company raced to find ways of adopting ChatGPT or similar AI technology made by other companies into their businesses. Wall Street reacted by aggressively bidding up the stock prices of companies that could slap an AI label on their names or products.
“There is just one thing,” Marcus writes. “Generative AI (the kind that mimics human activity) doesn’t actually work that well, and maybe never will. To be sure, Generative AI itself won’t disappear. But investors may well stop forking out money at the rates they have, enthusiasm may diminish, and a lot of people may lose their shirts. Companies that are currently valued at billions of dollars may fold or be stripped for parts. Few of last year’s darlings will ever meet recent expectations, where estimated values have often been a couple hundred times current earnings. Things may look radically different by the end of 2024 from how they looked just a few months ago.”
Marcus may be right about the financial markets discounting the value of AI. But it’s hard to see how financial market skepticism will stop those who want to use misinformation for political gain. Money isn’t the issue in a polluted political market. Misinformation is the currency.
—James O’Shea
James- I appreciate this in-depth perspective on AI. I never thought I’d live to see the day when we have to talk about this and yet here we are. Glad you brought it up.
I hope you're right, but fear we're in an environment you wrote about in an earlier millennium, noting that a big majority of people everywhere are unwilling to pay for good information. They prefer information that's interesting and free. That's a problem even bigger than AI and less susceptible to technological solutions.