AI absorbs misinformation and delivers it back: “It’s designed to provide helpful information, but not necessarily accurate.”


This is most apparent with live events. During the protests in Los Angeles, California Governor Gavin Newsom shared images of National Guard soldiers, deployed by U.S. President Donald Trump, sleeping on the floor. The photos were seen as a sign of inadequate preparation, raising questions about the necessity of the deployment, a decision criticized by Democratic officials in the city and state. Conspiracy theorists, however, claimed that the photos were either AI-generated or from another time. Amid the confusion, some users sought clarification from ChatGPT or from Grok, X’s AI chatbot.

ChatGPT asserted that the images were likely taken during Joe Biden’s inauguration in 2021, while Grok claimed they showed soldiers during the evacuation of Afghanistan, also in 2021. These false attributions proliferated through the chaotic information landscape surrounding the Los Angeles protests. Social media posts, blog articles, and unreliable news outlets spread hoaxes and unverified claims: the AI systems absorbed the disinformation and repeated it back unfiltered. Grok even refused to correct a false response when a user pointed out the error.

“These chatbots are designed to provide useful information, but not necessarily correct information,” says Julio Gonzalo, researcher and professor of computer languages and systems at Madrid’s National University of Distance Education (UNED). “They lack a genuine verification mechanism. If they encounter something frequently, it’s more likely to be included in their answers.”

A study by NewsGuard, an organization that analyzes misinformation, found that in January the 10 leading AI tools echoed false information up to 40% of the time when responding to prompts about contested news topics. The tools evaluated included ChatGPT, Google Gemini, Grok, Copilot, Meta AI, Anthropic’s Claude, Mistral’s Le Chat, and Perplexity.

“We have seen that these chatbots are regularly exposed to a polluted information ecosystem where content from untrustworthy websites is prioritized due to metrics, audience size, or user engagement,” explains Chiara Vercellone, NewsGuard senior analyst.

The issue intensifies with breaking news, which often involves a degree of confusion. “Especially during times when there isn’t much information available about a recent news item, or when events occur in places where reliable information is scarce,” says Vercellone. “These chatbots depend on unreliable information and present it to users.”

When tasked with responding to controversial topics rife with misinformation, AI systems become less effective. “Models are trained by analyzing everything available to them. This process is quite expensive and has a limited shelf life,” states Gonzalo. This means a model has only assimilated information up to a specific cutoff date and is blind to anything published afterward.

The UNED researcher elaborates on how AIs gather information about current events: “Chatbots perform an internet search and read the results to generate a summary for you. That’s how they can manage information almost in real time.” This includes viral content from social media and information replicated by multiple sources. “As tools for staying informed about current events, they are very hazardous. They will regurgitate what they’ve read, depending on how you request the information,” advises Gonzalo.
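Conceptually, this “search, then summarize” step can be pictured as a short pipeline. The sketch below is purely illustrative; the function, sources, and snippets are invented rather than taken from any real chatbot. What it shows is the weakness Gonzalo describes: raw search results enter the model’s context with no check on their truth or reliability.

```python
# A minimal, illustrative sketch of the "search, read, summarize" loop
# described above. Every name and sample result here is invented; real
# chatbots use proprietary search and model APIs.

def build_grounding_prompt(question: str, results: list[dict]) -> str:
    """Concatenate raw search snippets into the text an LLM will summarize.

    Note what is missing: nothing checks whether a snippet is true
    or whether its source is reliable.
    """
    snippets = "\n".join(f"- [{r['source']}] {r['snippet']}" for r in results)
    return (
        "Answer the question using only the snippets below.\n"
        f"Question: {question}\n"
        f"Snippets:\n{snippets}"
    )

# Simulated search results: a viral post ranks alongside real reporting,
# and both flow into the model's context unfiltered.
results = [
    {"source": "local-newspaper.example",
     "snippet": "Guard troops photographed sleeping on a federal building floor."},
    {"source": "viral-post.example",
     "snippet": "Those photos are AI-generated, or from Kabul in 2021."},
]

print(build_grounding_prompt("Are the National Guard photos real?", results))
```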

Content moderation challenges

When Elon Musk took over Twitter, he dismantled the social network’s content moderation system. It was substituted with “community notes,” which enable users to add context to posts in an effort to mitigate misinformation. Meta announced in January that its platforms Facebook, Instagram, and Threads would discontinue third-party fact-checking in favor of a system based on “community notes.”

“Now we’re confronted with platforms that have recently eliminated filters. There is no content moderation, resulting in even more misinformation online,” remarks Carme Colomina, global politics and misinformation researcher at the Barcelona Center for International Affairs, discussing social media. “The quality of information on the internet has deteriorated, which jeopardizes the training of AIs from the outset.”

When training an AI model, data quality is vital. However, these systems are often fed content indiscriminately, drawing on a vast array of material and rarely distinguishing between sources of different quality. “Search engines have ways to establish credibility, but currently, language models do not. This makes them more susceptible to manipulation,” Gonzalo explains.

He also notes that such rules could be integrated during programming. “You can restrict the language model’s searches to sites deemed reliable. For instance, you can prevent them from gathering information from Forocoches [a popular Spanish website similar to 8kun]. There’s potential to limit the sources they rely on, although these are later combined with the internal knowledge of the model.”
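A rough sketch of what such a restriction might look like in practice, assuming a hand-curated allowlist of domains; the domains and result data below are placeholders, not a real reliability list:

```python
from urllib.parse import urlparse

# Hypothetical allowlist; a real system would curate and maintain this.
ALLOWED_DOMAINS = {"apnews.com", "reuters.com", "elpais.com"}

def filter_results(results: list[dict]) -> list[dict]:
    """Keep only search results served from allowlisted domains."""
    kept = []
    for r in results:
        domain = urlparse(r["url"]).netloc.removeprefix("www.")
        if domain in ALLOWED_DOMAINS:
            kept.append(r)
    return kept

results = [
    {"url": "https://www.reuters.com/world/guard-deployment", "snippet": "..."},
    {"url": "https://forocoches.example/thread/123", "snippet": "..."},
]

print(filter_results(results))  # only the reuters.com result survives
```

Even with such a filter, as Gonzalo notes, the retrieved material is still combined with the model’s internal knowledge, so the restriction is partial at best.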

This task is not straightforward. There are malicious actors who deliberately publish false information to taint AI systems. This practice is known as LLM (large language model) grooming. “It’s a technique to manipulate chatbot responses. It intentionally introduces misinformation, which then becomes part of the training data for AI chatbots,” Colomina explains.

NewsGuard has investigated one such operation, the Pravda Network; a recent report concluded that the top 10 AI models absorbed and repeated its misinformation in 24% of cases. Filtering out Pravda’s content through restrictions is challenging: its websites generate thousands of articles weekly and frequently repost material from state propaganda sites, and the organization continuously creates new sites, complicating tracking efforts.
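The tracking problem can be illustrated with a trivial sketch: a static blocklist, however long, cannot anticipate domains that did not exist when it was compiled. All domains below are invented.

```python
# Hypothetical, static blocklist of known network domains.
BLOCKLIST = {"pravda-one.example", "pravda-two.example"}

def is_blocked(domain: str) -> bool:
    """True only if the domain was on the list when it was last compiled."""
    return domain in BLOCKLIST

print(is_blocked("pravda-one.example"))     # True: known site
print(is_blocked("pravda-mirror.example"))  # False: registered yesterday
```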

The issue is further complicated by the rising popularity of AI. “An increasing number of people are using AI chatbots as their primary search engines and trusting the information they provide without questioning its reliability or credibility,” states Vercellone, who recommends the classic anti-misinformation strategy of verifying received information using trusted sources.

Colomina identifies another concerning aspect: “What worries me most about this issue is that we have already integrated AI into our decision-making, not just personal decisions but also administrative and political ones. AI is embedded at all levels of society. We are allowing it to make decisions based on information we consider neutral, when in reality, it is far more subjective,” she warns.
