
This article will be available in Spanish at El Tiempo Latino.
Misinformation is nothing new. It has, however, become ubiquitous and, in some cases, more difficult and time-consuming than ever to debunk.

When we first started publishing in 2003 — which predated Facebook (2004), YouTube (2005) and Twitter (2006) — viral misinformation took the form of chain emails. Although they were a problem at the time, chain emails were to misinformation what the Pony Express is to ChatGPT.
As the popularity of social media platforms has grown, so too has the reach of viral misinformation and the speed with which it travels. And this falsehood-fraught environment is increasingly where people get their news.
In a survey of U.S. adults last year, the Pew Research Center found that “just over half of U.S. adults (54%) say they at least sometimes get news from social media.”
The incredible growth of podcasts also has helped spread misinformation on social media. According to the Pew Research Center, 42% of Americans 12 and older said they had listened to a podcast in the past month in 2023 — up from only 9% in 2008. In February, YouTube — the largest video platform — announced that it had more than 1 billion monthly podcast users.
The rise of artificial intelligence, or AI, makes it even more difficult for social media users to separate fact from fiction.
“AI technologies, with their capability to create convincing fake texts, images, audio and videos (often referred to as ‘deepfakes’), present significant difficulties in distinguishing authentic content from synthetic creations,” Cathy Li and Agustina Callegari of the World Economic Forum wrote last year in an article on how to combat AI misinformation.
Our work aims to inform the public and debunk political falsehoods. But we can’t fact-check everything. Here’s our advice on how to identify bogus posts and factual distortions.
Think before sharing. We have long advised our readers, “Be skeptical, not cynical.” When it comes to online content, that means: Think twice before you share that social media post.
“Don’t hit reshare until you stop and think to yourself, ‘Am I reasonably sure that this is right … does this seem plausible?’” David Rand, a professor of brain and cognitive sciences at MIT, told PBS Newshour last year.
We know that this can be hard to do, particularly if the content evokes a powerful response in you and aligns with your beliefs — which is often the case. There are two reasons for that:
- Seeking clicks, content providers give us text, images and videos that often provoke a reaction from us.
- Using algorithms, social media platforms feed us what they think we want to see and hear.
As a result, social media posts often play to our emotions, and, as humans, we are susceptible to confirmation bias — which is the tendency to give too much weight to information that confirms our beliefs. The combination of the two makes misinformation go viral.
But resist the urge to immediately reshare.
Consider the source. Who shared the claim? What do you know about this person or organization? Do they have any partisan or financial conflicts? What qualifies them to write or speak about the subject?
We’ve seen a lot of misinformation from people who draw conclusions and share opinions, despite a lack of expertise in the subject or a clear conflict of interest, or both.
At the height of the COVID-19 pandemic, we debunked bogus claims about the virus from several chiropractors. A repeat offender, chiropractor Eric Nepute, was sued by the Justice Department and Federal Trade Commission for violating the COVID-19 Consumer Protection Act. In a settlement, Nepute agreed to pay a fine and stop making false claims about supplements that he advertised and sold as preventatives and treatments for COVID-19. The civil complaint said Nepute and his companies “have earned a significant amount of money from selling these and other Wellness Warrior Products.”
If someone is making claims in an effort to sell you something, that’s a red flag to be skeptical.
Of course, we also see a lot of misinformation from partisans — so be wary of liberal and conservative social media accounts making claims about the other side.
For example, we recently debunked the misleading claim spread by President Donald Trump and conservative commentators that Politico, an online news outlet, was being “completely” or “massively funded” by the U.S. Agency for International Development under the Biden administration. In fact, the media payments were for subscriptions that were common at many federal agencies under the Trump and Biden administrations.
Evaluate the evidence. Does the person making the claim provide any evidence, such as links to articles, published research or other sources? Are any sources mentioned, but no links provided? How credible is the evidence provided?
It’s a red flag if no sources are provided. If sources are cited, find the source material and see if the evidence supports the claim. You would be surprised how often the “evidence” doesn’t support the claim. (Be careful in clicking on links. Make sure they lead to a legitimate website.)
Last month, we did a story on social media posts that falsely claimed Trump ordered former Philippine President Rodrigo Duterte’s release from the International Criminal Court. The posts cited “Executive Order 2025-03” — which doesn’t exist. That’s not even the numbering system for executive orders.
You should also check the credibility of the source material provided in the social media post.
We recently debunked misleading claims about measles in a video posted to X by Mary Holland, the CEO of the anti-vaccine advocacy group Children’s Health Defense, which was founded by Health and Human Services Secretary Robert F. Kennedy Jr. Holland based her claims on an article written by Sayer Ji, the founder of an alternative medicine website who was named in the Center for Countering Digital Hate’s “Disinformation Dozen,” a list of top spreaders of vaccine misinformation on social media. Ji has a bachelor’s degree in philosophy from Rutgers University.
Ji’s history of spreading misinformation and his lack of expertise in the area of infectious diseases are red flags.
In another case, we wrote about an article in a peer-reviewed journal that made many false claims about COVID-19 mRNA vaccines. The article — which was later retracted — was written by known vaccination opponents who have spread misinformation about the mRNA vaccines, and it was published in a journal that did not have the same standards as more reputable journals.
If the social media post includes an image that you suspect might be a fake, then you can use reverse image search engines, such as Google and TinEye, that may help you find the original image and where and when it appeared online. We have used such tools numerous times over the years.
Evidence or opinion? Cable TV commentators, podcasters and columnists have blurred the line between news and opinion.
If the evidence cited in the social media post comes from a news source — or purports to come from a news source, sometimes falsely labeled “breaking news” — you should consider whether the social media post is sharing fact-based reporting or someone’s opinion of the news.
Everyone is entitled to their own opinions, but we’ve found that many partisan websites, podcasters and commentators — whether they are pushing a liberal or conservative agenda — aren’t telling the full story. Their version of the facts is often slanted to benefit their side.
Consult the experts. If you are still unsure about the veracity of a social media claim, then you should consult the experts. That includes FactCheck.org — we’re on YouTube, Facebook, Instagram, Threads, X, BlueSky, WhatsApp and TikTok.
A good place to start is Google or the search engines of FactCheck.org and other fact-checking websites.
The search should include keywords or a short excerpt of the social media post, podcast or video. For example, social media posts claimed the Department of Government Efficiency stopped “royalties” to former President Barack Obama for “Obamacare,” formally known as the Affordable Care Act. The top two results of a recent Google search of “royalties + Obama + Affordable Care Act” turned up articles by FactCheck.org and AFP Fact Check, a France-based fact-checking organization.
Google has also created a tool called “Fact Check Explorer” — a searchable database of fact-checking articles from around the world. The same search for “royalties + Obama + Affordable Care Act” on Google’s Fact Check Explorer turned up six fact-checking articles — each debunking the claim about Obama.
Fact-checking articles take time to produce, so in some cases you may not immediately find a fact-checking article on the topic. You may, however, find some news articles on the subject — but make sure you are using trusted sources, such as the Wall Street Journal, Reuters, the Associated Press, the New York Times and other established news outlets.
We know that trust in the media is low, but the fact is that legitimate news organizations, such as the Washington Post and New York Times, have written policies and procedures for such things as newsgathering, editing and corrections, as well as standards for ethical behavior and conflicts of interest.
Even when using such trusted sources, you might want to check more than one source to see what others are reporting. Multiple news outlets will report on breaking news and major news developments, so be wary if only one news organization is reporting on the “news” that you are seeing on social media.
AI-Generated Images
As we mentioned earlier, online content may be created by generative AI, which “can create original content — such as text, images, video, audio or software code — in response to a user’s prompt or request,” as IBM explains on its website.
We’ve already covered text in the section above. The same rules apply to text created by humans or AI services. Here we focus on AI-generated images, videos and audio.
We have been writing about fake photos for years. In the early years, the fakes were real images that were altered using Photoshop or other editing programs.
In 2008, for example, we wrote about an image that purportedly showed then-Alaska Gov. Sarah Palin wearing a red, white and blue bikini and holding a rifle. But it wasn’t her. Her head had been Photoshopped onto the body of another woman.
Using AI, people looking to entertain or cause mischief can create entirely new images, video and audio. Experts say you may be able to spot a fake by looking closely for red flags.
“It is possible to create realistic appearing images, audio, and video with today’s generative AI tools,” Matthew Groh, an assistant professor of management and organizations at Northwestern University’s Kellogg School of Management, told us in an email. “One of the best ways to spot a lie (and likewise AI-generated media) is to search for contradictions.”
Groh and his colleagues published a research paper in February that measured the accuracy of more than 50,000 participants who were asked to identify whether images were real or AI-generated. The participants were given “unlimited time, 20 seconds, 10 seconds, 5 seconds, and 1 second.” The paper found that “longer viewing times” improved the participants’ accuracy.
Unnatural body parts. Groh and his Northwestern colleagues identified telltale signs of AI-generated photos for an article last year in Kellogg Insight, a school publication. They advised social media users to look closely at various body parts for “anatomical implausibilities.”
“Are there missing or extra limbs or digits? Bodies that merge into their surroundings or into other nearby bodies? A giraffe-like neck on a human? In AI-generated images, teeth can overlap or appear asymmetrical. Eyes may be overly shiny, blurry, or hollow-looking,” the article said.
If the person is a public figure, you can compare facial features with existing news photos to spot discrepancies, the article also noted.
Odd objects. There may also be oddities in the way that body parts interact with objects, or even problems with the objects themselves.
For example, the Kellogg Insight article included an AI-generated image that showed a person’s hand inside a hamburger. The hamburger itself is improbably large.
“When there’s interactions between people and objects, there are often things that don’t look quite right,” Groh told Kellogg Insight, referring to these oddities as “functional implausibilities.”
Irregular shadows and reflections. AI also has trouble with shadows and reflections. Shadows may be cast in different directions, and reflections may not match the object they purport to reflect, Groh and his colleagues said.
For example, an AI-generated image in the Kellogg Insight article shows a person wearing a short-sleeved shirt, while his mirror image is wearing a long-sleeved shirt. The Northwestern research paper describes these irregularities as “violations of physics.”
The researchers also identified two other telltale signs of AI-generated images: “stylistic artifacts,” which refer to “overly glossy, waxy, or picturesque qualities of specific elements of an image,” and “sociocultural implausibilities,” which are “scenarios that break social norms, cultural context, or historical accuracy.”
Nonsensical words. Jonathan Jarry, a science communicator with McGill University’s Office for Science and Society, explained in an article on the university’s website last year that AI-generated images have trouble with words. In his article, Jarry asked an AI service to create a photo of Montreal circa 1931. One problem, however, was that the lettering displayed on background signage was “gibberish.”
(Think you can tell real photos from bogus AI-generated images? Take the “Detect Fakes” test on the Kellogg School’s website.)
AI-Generated Video and Audio
Unlike fake images, bogus video and audio are fairly new phenomena.
We recently wrote about an audio clip circulating on social media that purported to show Donald Trump Jr. saying “the U.S. should have been sending weapons to Russia,” instead of Ukraine. But we found no evidence that Trump ever made such a comment, and a digital forensics expert told us it was likely fake.
Look for contextual clues. Determine if the text of the post or the audio or video clip itself offers any contextual clues — such as where and when the words were allegedly spoken.
In the case of the fake Trump audio, one red flag was the claim that the president’s son made his remark about Russia on a Feb. 25 episode of his podcast, “Triggered with Donald Trump Jr.” However, Trump did not make any such remark about Russia during that episode.
Listen for audio anomalies. The European Digital Media Observatory, a project of the European Commission, offers tips for detecting AI-generated audio and video. When listening to audio, it says to “[p]ay attention to choices of words, intonation, breaths, unnatural pauses and other elements that can manifest anomalies.”
Watch the quality of the video. The EDMO suggests checking “the quality of the video” to spot “out of focus contours, unrealistic features” and poor “synchronization of audio and video,” i.e., when the lips don’t match the audio.
Look for disclaimers. Some social media platforms — including Meta, YouTube and TikTok — require users to add a label on AI-generated content. Check to see if the platform you are using has such a policy and, if so, look for the disclaimers.
For example, Meta, which owns Facebook, Threads and Instagram, uses an “AI info” label “for content we detect was generated by an AI tool and share whether the content is labeled because of industry-shared signals or because someone self-disclosed.”
Groh, the Northwestern assistant professor, said that Community Notes on X — the platform formerly known as Twitter — can be useful at flagging AI-generated content.
“Community notes can be very useful for adding context and directing people’s attention to possible tells,” such as this note in response to an image posted following Hurricane Milton, Groh said. “Likewise, context and insights from trusted sources like fact checkers or digital forensics experts can be useful for helping people on social media make up their minds about whether what they’ve seen online is AI-generated or real.”
Editor’s note: FactCheck.org does not accept advertising. We rely on grants and individual donations from people like you. Please consider a donation. Credit card donations may be made through our “Donate” page. If you prefer to give by check, send to: FactCheck.org, Annenberg Public Policy Center, P.O. Box 58100, Philadelphia, PA 19102.
The station How to Combat Misinformation appeared archetypal connected FactCheck.org.