I Got Trapped in the AI Ouroboros
A foreign news outlet fabricated a quote from me. LLMs have already picked it up.
Like most people who sometimes appear in the news, every day I read the dystopian grab bag that is the Google Alert for my name. Over the past three years, it has mostly notified me when people who believe the lie that I wanted to censor people (while ignoring the Trump administration’s actual censorship) write inane things about me. If I’ve recently provided commentary to a news outlet, it tells me that, too. Once, it tipped me off that I had been depicted in deepfake porn. Sometimes it notifies me of violent threats.
Yesterday, Google alerted me that The Economic Times, a large English-language Indian newspaper owned by The Times Group (tagline: “Let Truth Prevail”), had quoted me in a piece on the deepfake video President Trump shared depicting President Obama being arrested.
But the quote wasn’t mine. Here’s what The Economic Times claimed I said:
“This deepfake is political disinformation at its worst,” said Nina Jankowicz, disinformation analyst and former executive director of the US Disinformation Governance Board. “It erodes public trust, damages reputations, and poses serious threats to democratic stability.”
I’m usually happy to provide commentary to the media in my area of expertise, but I have never spoken to or corresponded with a reporter from this outlet, nor did I issue any public commentary about the Trump-Obama deepfake.
It’s possible the article itself was AI-generated; it’s attributed only to “Global Desk,” not an individual reporter. The Economic Times’ website is filled with noisy ads and is being indexed by Google. I’d bet they whipped up this otherwise passable article, slapped a quote from an expert on it to give it credence, and posted it to capitalize on the ad revenue a piece about Trump and AI would generate. “Let Truth Prevail,” indeed.
Regardless of the motivation, I wrote to The Economic Times to let them know I wanted the quote removed: “The manufactured quote is incorrect and irrelevant,” I told them, “and it negatively impacts my professional reputation.” (Yes, that quote is authentic.)
I also posted the note to Bluesky, where thousands of people interacted with it. A few folks had questions—What was wrong with the quote, other than that I didn’t say it? How did it negatively impact my professional reputation?—and there are a couple of important points to make here. A teachable moment, if you will.
I would not have called the Trump-Obama deepfake “political disinformation at its worst.” First, if anything deserves that label, it’s the tens of thousands of image-based sexual abuse videos, colloquially known as deepfake pornography, that target women in politics.
Second, the video is not disinformation—it’s parody. Yes, it’s terrifying, and whackadoodle, and I certainly wish the sitting president weren’t posting anything like it, but it is parody nonetheless. If Trump had posted a traditional political cartoon of a political opponent being arrested, we wouldn’t focus on the medium, but the message: the sitting President of the United States is encouraging the arrest of a former President based on made-up crimes in order to distract from an ongoing scandal.
That’s the most important point here, and probably what I would have told the outlet if they had bothered to get in touch with me. Deepfakes are most dangerous when they’re convincing and when they come from a source that is trusted. I’m less concerned about the President posting an obvious deepfake and more concerned about the impact of his calls for the prosecution of perceived enemies.
I’m also pretty concerned about how this fake quote impacts me. One of the biggest baseless criticisms of disinformation researchers is that we’re all hysterical harpies who label any information we don’t like as disinformation that poses threats to democracy or national security. This fabricated quote gives my critics yet more fodder to claim that I “can’t take a joke” or that I would want to censor content like this. (For the record: Label? Probably. Censor? No.)
The article has been syndicated to MSN and other sites, and the fake quote is already being hoovered up by AI chatbots. A few hours after the article was published, an AI-enabled account on X posted this:
I had the following exchange with Perplexity, an AI-powered “conversational search engine” that draws information from web searches in addition to ChatGPT, Claude, and its own LLM:
I told Perplexity it was wrong, and linked it to my Bluesky post. Then it spat out the following:
This, of course, is another example of LLMs’ inability to reason; any researcher worth their salt would probably look at my social media profiles to see if I had shared anything relating to the article before confirming the quote’s authenticity. Perplexity, like conspiracy-minded Uncle Bob or Aunt Sally, thinks that just because something exists on the internet it must be true.
To sum up: a news outlet fabricated a quote—possibly using AI—about an article on AI, and AI chatbots hoovered up the article and are now attributing the quote as fact. As my colleague Sophia Freuden and I wrote earlier this year, “iterative relationships between large language models—that is, models being trained on AI-generated content, generating additional content, and so on—threaten to make an ouroboros of the internet.”
And now I appear to be trapped in it.