As part of our publisher- and reader-first approach to selecting and editing the stories we syndicate, Stacker is kicking off Q2 of 2026 with more rigorous screening for AI-generated content, which has long been prohibited on our wire.
This multi-step effort ensures we preserve the quality of stories available on the wire and the trust we have built with our publisher partners.
A reinforced approach to content review
The first part of this effort involved reaffirming our policy barring the submission of AI content for syndication and communicating it to all clients: directly via email, in blog posts like this one, and on onboarding calls, as well as giving the policy greater prominence in the editorial standards all clients receive.
The second part involves actively screening submissions to flag possible AI-generated content. After a trial period in which we screened stories we had identified as containing a significant number of words and phrases common in AI-generated text, we now automatically screen every story for AI content when it is submitted for approval.
Our screening vendor is Pangram, which has performed well in third-party evaluations.
That said, it’s important to note that false positives do happen, and any AI checker is a tool that indicates only the likelihood a story was AI-generated. It is evidence, not proof.
Where tools end and editors step in
That’s why Stacker’s editors do a secondary screening of any story flagged for potential AI content and make a human determination based on many years of newsroom editing and reporting experience.
When that secondary screening leads us to conclude there’s probably fire where the checking tool smelled smoke, we alert the client and give them the opportunity to revise and resubmit the story using human writers and editors.
Is that a perfect process?
Absolutely not. But it’s a fair and rigorous process that protects the interests of our publishers and readers to the fullest extent possible.
Why this matters
Let’s pause here a moment to consider one of the most disastrous consequences of AI-generated content: “hallucinated” information, including made-up data and quotations from non-existent sources.
We are proud that our editing process helps guard against syndicating phony data and quotes, whether they are generated by AI or are made up by a dishonest writer.
When clients submit data-driven stories, we check the sourcing in all instances. If the client does not provide a source link for a given data citation, we remove it from the story. We also check to make sure stories don’t cite phony people, places or things, or quote sources who don’t exist.
This is not to say our process is infallible. No process is.
But it should give our publishers and their readers confidence that Stacker-syndicated stories are all vetted by experienced human editors who apply high editorial standards.
There are several other issues with AI-generated content, of course. One is that AI has tended to produce low-quality writing. But it’s getting harder to detect the differences between human and machine-written text as large language models grow increasingly sophisticated.
That’s why our focus on high editorial standards and human review is so important: Even if AI content slips past a screening tool, Stacker’s editing process naturally weeds out potential AI slop.
A developing story
AI usage is constantly evolving, but the need to deliver factually sound stories to publishers and readers will never change. So if we meet back in this space a year from now and find ourselves in a world where respected publishers regularly use AI tools to report and write stories in their own newsrooms (something that’s already starting to happen), Stacker will still be positioned to deliver stories to our network that publishers and readers alike can trust.
Frank Sennett is the Executive Editor of Stacker Connect.
Featured Image Credit: Photo Illustration by Stacker // Canva