In marketing, most new channels start like this: a rush of conviction, investment poured in, and then the big question… How do you measure this?
Fifteen months ago, optimizing for visibility in AI search was a nascent concept; GEO as an acronym didn't exist in the mainstream. But this past year has seen an explosion of interest: fast forward to today and every C-suite is asking its marketing team what its GEO strategy is. Yet as much as marketers agree that "GEO is a thing," we're far from consensus on what to actually measure, let alone the best ways to measure it.
With AI visibility measurement companies like Scrunch, the first real measurements have started to show up and the GEO KPI set has finally wiggled its way into reporting decks. And when measurement arrives, it changes the conversation.
New approaches: measuring the actual impact of earned media
Countless third-party studies have corroborated an important stat: the vast majority (over 80%) of AI citations come from earned media (vs owned or paid media). This has been hugely helpful for marketers in guiding strategies around where to focus efforts. But beyond "knowing you're rowing in the right direction," it's been impossible to measure the actual citation lift driven by a piece of earned media (or several earned hits).
In January, we partnered with Scrunch and started collecting data on the thousands of stories that Stacker distributes for brands. Three months in, we ran our largest GEO study to date to answer a deceptively simple question:
How much does distributing your content as earned media drive lift in AI Visibility?
Inside the Study: Methodology and Scale
We measured 87 stories distributed across Stacker's publisher network, representing 30 distinct brands. Then we queried 8 AI platforms, using ~30 prompts per story, and took a post-distribution snapshot.
We measured two things at the same point in time:
- Citations to the brand's owned domain, and
- Citations to publisher sources in the Stacker network
This included two levels of analysis: story-level (n=87) to understand lift and variance, and brand-level (n=30) to understand share-of-visibility.
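To make the two levels of analysis concrete, here is a minimal Python sketch of how story-level lift and brand-level share-of-visibility could be computed from citation counts. All numbers, brand names, and the exact lift definition here are illustrative assumptions, not the study's raw data or actual code.

```python
from statistics import median

# Hypothetical per-story snapshot: citations to the brand's owned domain
# and to publisher sources in the distribution network (all figures are
# illustrative, not the study's data).
stories = [
    {"brand": "A", "owned": 2, "network": 7},
    {"brand": "A", "owned": 1, "network": 4},
    {"brand": "B", "owned": 0, "network": 5},
    {"brand": "B", "owned": 3, "network": 3},
]

# Story-level lift (one possible definition): additional visibility from
# network citations relative to owned citations alone, as a percentage.
# Stories with zero owned citations are skipped to avoid division by zero.
lifts = [100 * s["network"] / s["owned"] for s in stories if s["owned"] > 0]
print(f"median story-level lift: {median(lifts):.0f}%")

# Brand-level share-of-visibility: fraction of a brand's total citations
# that come from network (earned) sources rather than its own domain.
totals = {}
for s in stories:
    owned, net = totals.get(s["brand"], (0, 0))
    totals[s["brand"]] = (owned + s["owned"], net + s["network"])

for brand, (owned, net) in totals.items():
    print(f"brand {brand}: {100 * net / (owned + net):.0f}% earned share")
```

The two aggregations answer different questions: the story-level median captures typical lift and its variance, while the brand-level rollup shows how much of a brand's total AI footprint earned sources carry.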
There has been almost no controlled measurement of AI citation behavior at this scale to date. As the distribution system of record for hundreds of organizations' editorial content, Stacker is uniquely positioned to track the actual impact of distribution on AI visibility and citations.
Review the full methodology outlined in the study.
Why We Ran This Again, But Bigger
Stacker ran a smaller version of this research in Q4 2025.
The first study measured eight stories and showed an average 325% lift in AI visibility through distribution. That result justified expanding the work and raised a more significant question: was that number a repeatable pattern, or a small-sample spike?
So we scaled the study by more than 10x, from eight stories to 87.
The expanded study shows a median lift of 239%, versus the original study's 325% average. But what's exciting here is the scale: across a significantly larger volume of stories studied, we start to map a more robust picture of the predictable impact of distributing your content to third parties.
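Why report the median rather than the average? In a right-skewed distribution, a few outsized wins pull the average well above the typical story. A toy illustration with entirely synthetic lift numbers (not the study's per-story results):

```python
from statistics import mean, median

# Synthetic per-story lift values (percent) with one large outlier,
# illustrating how a long right tail separates the mean from the median.
lifts = [50, 120, 239, 400, 900]
print(f"median: {median(lifts)}%  mean: {mean(lifts):.0f}%")
```

The median describes the typical story; the mean is sensitive to a handful of breakout hits, which is why the two studies' headline numbers aren't directly comparable.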
This dataset surfaced patterns the first study could not:
- brand visibility segmentation
- the coverage breadth advantage across platforms
- the near-universal 97% citation rate
- the "sole source" outcome in roughly 1 in 5 stories
The larger sample also enabled us to prove statistical significance across multiple methodological stress tests.
Eight stories gave us a signal that this was something to investigate further. Eighty-seven stories give us the confidence to point to this as something that every brand needs to be paying attention to.
Coverage Breadth Is Your New Authority Signal
Before we even got to the "owned vs. earned" debate, something else became obvious.
The most meaningful GEO outcome isn't "did we show up in one model." It's:
Do we show up across models?
That's what we're calling coverage breadth: not just whether a story gets cited, but how consistently it shows up across different AI engines and different types of questions.
This matters because "AI search" isn't a single place or a single keyword. It's an ecosystem. A ChatGPT user is not the same person as a Perplexity user. Gemini behavior isn't identical to either. And as Claude adoption grows, you can't hedge your bets by optimizing for a single platform. Similarly, questions asked of AI search engines tend to be far more specific than traditional keywords.
There's no buying your way to visibility through keyword saturation anymore. Instead, you need to make sure your content covers a large array of questions and that your brand gets cited as a source for as many of them as possible. The depth and breadth of your platform coverage is your authority measure, functioning as something like a new domain rating (DR).
And in the data, distribution changed that in a way that should reframe how GEO teams measure success.
Syndication increased cross-platform AI coverage from 5.4% to 17.9% at the median, nearly tripling how consistently brands surfaced across AI platforms.
That matters because AI visibility isn't a single ranking position. It's a probabilistic game played across platforms, prompt variations, and answer formats. Broader coverage means more surface area, more chances to be cited, and a more consistent AI presence.
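One way to make coverage breadth operational: treat it as the share of platform × prompt combinations in which the brand is cited at all. A minimal sketch with hypothetical platform and prompt names (an assumption for illustration, not the study's instrumentation):

```python
# Hypothetical citation grid: for each (platform, prompt) pair, record
# whether any source citing the brand appeared in the generated answer.
platforms = ["chatgpt", "perplexity", "gemini", "claude"]
prompts = ["best X for Y", "how do I Z", "compare A vs B"]

# Pairs where the brand was cited at least once (illustrative data).
cited = {
    ("chatgpt", "best X for Y"),
    ("perplexity", "best X for Y"),
    ("gemini", "how do I Z"),
}

# Coverage breadth: cited combinations over all possible combinations.
total = len(platforms) * len(prompts)
breadth = len(cited) / total
print(f"coverage breadth: {breadth:.1%}")  # 3 of 12 combinations
```

Tracked over time, this single ratio captures the "more surface area, more chances to be cited" framing above better than any one platform's citation count.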
This is something that is notably absent from most GEO playbooks. People talk about "visibility," but they rarely measure it in a way that reflects the fragmented, multi-platform search landscape.
What the Data Actually Shows About Owned Content
Here's what we saw:
97% of Stacker-distributed stories earned at least one AI citation.
64% of these AI citations came from third-party publisher sources.
When a story was distributed with Stacker, the distributed versions were 5.3x more likely than the brand's own site to be the sole source of that story's AI visibility.
That doesn't mean your site doesn't matter. It means the channel most GEO strategies are built around, owned media, isn't the end of the road for brands that want to increase their AI visibility. Great owned content is merely the starting line; distribution and earned media wins are the parts of the equation that make the actual difference.
Most brands are optimizing, reporting, and obsessing over the part of the system they control… while the system is rewarding the part of the web they don't control. It's more efficient to invest in third-party sources when those sources occupy the majority of responses.
If you were around for early SEO, you've seen this movie before. People over-indexed on on-page factors until external authority signals became impossible to ignore.
A Rare Opening for Challenger Brands
Traditional SEO rewards accumulated domain authority, which is what makes it so hard to displace the category leaders or for smaller brands to have a meaningful search presence. Incumbents compound and newer brands are left to fight an uphill battle.
This study highlights a new opportunity at the brand level.
The share of total AI visibility driven by earned media via Stacker distribution was highest where a brand's own-domain coverage was lowest. The network carried the most weight for brands with the smallest existing AI footprint.

That points to how AI citation ecosystems are forming: models weight relevance, recency, and specificity more than accumulated domain authority. A current, specific story distributed through credible publishers can surface ahead of a much larger competitor.
This creates a window in GEO that organic search rarely offers.
The window will narrow. As more brands invest in distribution infrastructure and more categories saturate, advantages will concentrate and the cost of competing will rise.
A Note on Causation
Okay, but is this causation or correlation?
This is observational data. We can show correlations and patterns, not definitive causation.
That said, a few findings hold up strongly:
The 97% vs. 82% citation rate advantage is statistically significant (p < 0.006), and coverage breadth is the most consistent strategic signal in the study.
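For readers who want to sanity-check the significance claim, a standard two-proportion z-test produces a p-value of this order. The counts below are back-solved from the reported rates (about 97% of 87 distributed stories, against an assumed comparison group of similar size at about 82%); they are illustrative assumptions, not the study's raw data.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-proportion z-test: returns (z statistic, two-sided p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the normal tail, via the complementary
    # error function (no third-party stats library needed).
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# ~97% (84/87) vs. ~82% (71/87): illustrative counts, not the study's data.
z, p = two_proportion_z(84, 87, 71, 87)
print(f"z = {z:.2f}, p = {p:.4f}")
```

Under these assumed counts the test comfortably clears conventional significance thresholds, consistent with the reported p < 0.006.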
Four Takeaways for CMOs Building GEO Strategy
If you're a VP Marketing or CMO and someone on your team is building a GEO plan, here's what I'd take from this immediately:
- Owned content alone is not a GEO strategy. It's the baseline.
- You need to measure coverage breadth across platforms to accurately understand your brand's current search presence.
- Distribution network quality matters as much as content quality for AI visibility. Publisher authority is doing work that owned channels can't replicate at scale.
- The earliest brands have the most leverage. The data shows the network's share of citations is highest where brand domain coverage is lowest. If you're not showing up much today, you're not "behind" (it's still early days); you're at an inflection point where you can get ahead of the curve.
What Comes Next
GEO is the most interesting measurement problem in marketing right now.
Not because it's trendy. Because it's unsettled—and the people who move from intuition to data first will write the playbook everyone else follows.
We're going to keep publishing what we find. Next questions we're already digging into:
- How results vary by platform
- Whether citation lift holds over time
- Which content types reliably expand cross-platform coverage
The brands that win in AI search won't be the ones who waited for certainty.
They'll be the ones who acted when the first real data showed up.
Noah Greenberg is the CEO of Stacker, the first content distribution platform built for earned reach. He's led the company in redefining how brands and publishers collaborate, with over 4,000 news outlets using Stacker to enhance coverage. A Forbes 30 Under 30 honoree, Noah previously helped scale Graphiq, later acquired by Amazon.
Featured Image Credit: Photo Illustration by Stacker // Canva