I Don’t Trust AI Either — But It Just Out-Researched Me in 3 Minutes

How a conspiracy video became a case study in AI-assisted fact-checking

A buddy sent me a video the other day. You know the kind — a guy talking to camera, an ominous image on his screen, bold claims about geopolitics and shadowy forces. The specific claims don’t matter as much as what happened next.

I asked my personal AI assistant, Josui, to help me fact-check it.

Before I could finish typing my first Google search, it had already done the work.

The Setup

The video made several layered claims — a fabricated quote attributed to a world leader, a conspiracy theory about an assassination, allegations of trafficking, and ties to ancient demon worship. It was the kind of content designed to overwhelm your ability to verify it. By the time you’ve debunked claim one, claims two through five have already burrowed into your brain.

This is how modern misinformation works. It’s not one lie. It’s a Matryoshka doll of lies, each nested inside the other, each requiring its own research thread to untangle.

What the AI Actually Did

Here’s where it gets interesting — and where my own assumptions got challenged.

In roughly two to three minutes, the AI:

  • Ran approximately 15 distinct web searches across multiple angles of the claims
  • Consulted and cross-referenced over 15 sources including Snopes, PolitiFact, Lead Stories, Check Your Fact, CNN, the Southern Poverty Law Center, the Times of Israel, the Jerusalem Post, NBC News, NPR, PBS, Wikipedia, the Associated Press, the Forward, and IBTimes
  • Used browser automation to navigate past a Facebook login wall and actually view the video — taking screenshots and analyzing the visual content, including an image displayed on the creator’s screen
  • Traced a fabricated quote from its actual origin (a conspiracy website in 2015) through its gradual embellishment (a false attribution to a legitimate newspaper) to its current deployment in viral content a decade later
  • Identified that foreign disinformation networks — Russian and Chinese, according to NBC News and researchers — had amplified the same narratives

Every single major claim in the video was debunked, with sourcing, by multiple independent fact-checking organizations.
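The cross-referencing step above can be sketched in miniature. This is a toy illustration, not the assistant's actual pipeline: the `Verdict` structure, the normalized rating strings, and the sample data are all my own assumptions, chosen to mirror the post's examples.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Verdict:
    source: str   # e.g. "Snopes" -- an independent fact-checker
    claim: str    # a short label identifying the claim being checked
    rating: str   # normalized rating: "false", "unfounded", "true", ...

def consensus(verdicts, claim):
    """Tally independent ratings for one claim; return the most common
    rating and how many sources agree on it."""
    ratings = [v.rating for v in verdicts if v.claim == claim]
    if not ratings:
        return None, 0
    top, count = Counter(ratings).most_common(1)[0]
    return top, count

# Hypothetical verdicts, loosely based on the sources named in this post.
verdicts = [
    Verdict("Snopes", "golden-calf-quote", "unfounded"),
    Verdict("Lead Stories", "golden-calf-quote", "unfounded"),
    Verdict("PolitiFact", "mossad-claim", "false"),
    Verdict("Associated Press", "mossad-claim", "false"),
]

print(consensus(verdicts, "golden-calf-quote"))  # ('unfounded', 2)
```

The point of the sketch is the shape of the work: the value isn't any single verdict, it's the agreement count across sources that don't share a newsroom.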

The Part That Should Make You Uncomfortable

Here’s the thing — I don’t fully trust AI either. I get it. The hallucination problem is real. The confident-but-wrong answers are real. The lack of transparency in how models reach conclusions is a legitimate concern.

But here’s what I can’t ignore: I watched an AI assistant do in two to three minutes what would have taken me an hour or more of manual searching. And it didn’t just search faster — it searched wider. It cross-referenced sources I wouldn’t have thought to check. It found the original 2015 article where the fabricated quote first appeared. It identified the conspiracy website’s own editor admitting that 30-40% of their content is deliberately false.

Could I have found all of that on Google? Eventually, yes. Would I have? Honestly — probably not. I would have checked two or three sources, satisfied my curiosity, and moved on. The AI checked fifteen.

The Real Confusion

And this is where I land in that familiar place — educated confusion.

We live in a moment where:

  • The tools best equipped to fight misinformation are the ones people trust the least — AI
  • The tools people trust the most for “doing their own research” are the ones most easily gamed — search engines and social media algorithms
  • The people creating conspiracy content understand information architecture better than most of the people consuming it — they know how to layer claims so deep that casual fact-checking can’t reach the bottom

The guy in the video claimed a quote was published in a major newspaper. It wasn’t. But by the time you’ve Googled the quote and found it repeated on a hundred websites, it feels true. That’s not research — that’s circular confirmation.

The AI didn’t fall for that. It traced the provenance. It found patient zero — the first website to publish the fabrication — and followed the mutation from there.
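Provenance tracing reduces to a simple idea: collect dated sightings of the claim and the earliest one is your best candidate for patient zero. A minimal sketch, with made-up dates except for the March 2015 Veterans Today origin described later in this post:

```python
from datetime import date

# Hypothetical sightings of the fabricated quote:
# (date first seen, outlet, attribution the outlet claimed)
sightings = [
    (date(2025, 9, 15), "viral Facebook reel", "major newspaper"),
    (date(2015, 3, 1), "Veterans Today", "none"),       # actual origin per Snopes
    (date(2018, 6, 10), "chain email", "major newspaper"),
]

def trace_origin(sightings):
    """The earliest dated appearance is the best candidate for patient zero."""
    return min(sightings, key=lambda s: s[0])

origin = trace_origin(sightings)
print(origin[1])  # prints "Veterans Today"
```

Googling the quote and counting hits weighs every repetition equally; sorting by date instead exposes that a hundred websites are all downstream of one fabrication.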

So What Do We Do With This?

I’m not here to tell you to blindly trust AI. That would be stupid, and it would miss the point entirely. AI models can and do get things wrong. They require verification. They’re tools, not oracles. But grounding them with tools like web search helps. A lot.

But I am here to say that dismissing AI as a research tool while trusting your own two-thumbs-and-a-search-bar approach is also not the move. Especially when the misinformation you’re trying to debunk was specifically designed to exploit the limitations of casual manual research.

The uncomfortable truth is that AI, used critically, is currently one of the best misinformation defense tools available to regular people. Not because it’s always right — but because it’s fast enough and wide enough to actually compete with the speed at which misinformation spreads.

Your move.

One comment on “I Don’t Trust AI Either — But It Just Out-Researched Me in 3 Minutes”

  1. Sources and Fact-Check: The Video Referenced in This Post

    The original video is a Facebook Reel by Ryan Matta (https://www.facebook.com/reel/4351702515101960) which makes several layered conspiracy claims about the assassination of Charlie Kirk. Here are two of the specific claims and the fact-checking sources that debunk them:

    Claim: Israel/Mossad orchestrated Charlie Kirk’s assassination
    Former Mossad director Yossi Cohen stated “Absolutely not, we had nothing to do with it” (Jerusalem Post). Netanyahu called the accusations “insane” and “propaganda” (Times of Israel). Police say 22-year-old Tyler Robinson confessed, with DNA on the rifle, bullet casings, security footage, and text messages as evidence. Investigations by PolitiFact, Snopes, and the Associated Press confirm there is no evidence connecting Israel or Mossad to Kirk’s murder. The Southern Poverty Law Center and ADL documented how these claims are rooted in antisemitic tropes.

    Claim: The fabricated “Golden Calf” quote attributed to Netanyahu
    The quote (“America is a golden calf and we will suck it dry…”) first appeared in a March 2015 article on Veterans Today — no evidence it existed before that publication. Veterans Today’s own senior editor Gordon Duff admitted in a 2012 interview: “About 30% of what’s written on Veterans Today is patently false. About 40% of what I write is at least purposely, partially false.” Snopes rates this claim “Unfounded”. Lead Stories also found no evidence supporting the quote.

    According to NBC News and NewsGuard research, Russian, Chinese, and Iranian state media mentioned Kirk 6,200 times in the week following his death, amplifying these false narratives.
