The results from early Generative Engine Optimization (GEO) implementations are settling the debate about whether optimizing for AI systems actually works. The answer is a resounding yes, but with important caveats about strategy and measurement.
The numbers tell a clear story
Multiple brands are reporting dramatic improvements in AI visibility within weeks of implementing GEO practices. Adobe's recent implementation delivered a 5x increase in citations for Adobe Firefly and a 200% increase in LLM visibility for Adobe Acrobat. GM saw a 23% increase in AI visibility and 35% more citations after creating LLM-friendly pages.
Digital consultancy Slalom achieved 100% content visibility across 100+ pages and 10x more citations using GEO techniques. These aren't isolated wins. Teams tracking citation performance are seeing measurable improvements within two to four weeks of optimization efforts.
The speed of these results makes sense when you understand how AI systems work. Unlike traditional search engines, which crawl and index over time, AI systems either have your content available (in their training data or via live retrieval) or they don't. When they do encounter optimized content, the improved structure and clarity immediately affect how they extract and cite information.
What's actually driving these gains
The most effective GEO strategies focus on making content more digestible for AI systems. Research from Princeton found that adding statistics to content improved AI visibility by up to 40%. But statistics alone aren't the answer.
The brands seeing the biggest gains are restructuring content around how AI models process information. This means clear topic sentences, explicit source citations, and structured data that AI systems can easily parse and attribute.
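One common way to expose explicit, attributable fields to AI crawlers is schema.org structured data. The snippet below is a generic illustration with made-up values, not the markup Adobe, GM, or Slalom actually used:

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "How Acme Widgets Reduce Assembly Time by 30%",
  "author": { "@type": "Organization", "name": "Acme Co" },
  "datePublished": "2025-01-15",
  "citation": "https://example.com/industry-study-2024",
  "about": "Industrial widget assembly"
}
```

Embedding markup like this in a `<script type="application/ld+json">` tag gives parsers unambiguous author, date, and source fields instead of forcing them to infer attribution from prose.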
GM's success came from creating what they called "LLM-friendly pages" that present information in the format AI systems expect. Adobe focused on making their product information more accessible to AI crawlers while maintaining the user experience.
The common thread isn't gaming the system. It's presenting authoritative information in a format that AI systems can confidently cite.
Why citation volume isn't everything
Here's where most brands get GEO wrong: treating all citations as equal. There's a compelling counterargument to chasing citation volume: not all AI mentions carry the same weight, and optimizing for quantity over quality can backfire.
A single citation in a high-confidence AI response often delivers more value than multiple mentions in uncertain or qualified answers. ChatGPT citing your brand as the definitive source on a topic carries more weight than Perplexity mentioning you alongside five competitors.
This is where measurement becomes critical. SeenByAI tracks how AI systems like Perplexity and ChatGPT represent your brand across Discover, Consider, and Trust stages, revealing whether your citations actually influence decision-making or just add noise.
The brands seeing sustainable results focus on earning authoritative citations rather than maximizing mentions. They're optimizing for the quality of context around their citations, not just frequency.
The implementation reality
The 5-10x citation gains grab headlines, but they often represent movement from near-zero visibility to measurable presence. For brands already earning AI citations, realistic expectations are 20-40% improvements in visibility and citation quality.
The timeline advantage is real. Traditional SEO changes can take months to show results. GEO optimizations appear in AI responses within weeks because the optimization targets how AI systems process existing content rather than waiting for new crawling and indexing cycles.
But speed cuts both ways. Poorly optimized content can quickly earn citations in the wrong context, associating your brand with topics or positions you don't want. The same techniques that drive positive citations can backfire if applied carelessly.
Getting started without the hype
Skip the complex GEO frameworks and start with a content audit. Identify your most important pages and analyze how AI systems currently represent that information. Look for gaps between what you want to be known for and how AI systems actually describe your brand.
Focus on three areas: source attribution (make it clear where information comes from), structured presentation (use headers and lists that AI can parse), and authoritative positioning (present information with confidence rather than hedging language).
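The "structured presentation" check can be partly automated. Here's a minimal, illustrative sketch (using only Python's standard library) that counts headings, list items, and outbound links on a page as a rough proxy for parseable structure and source attribution; the thresholds and sample HTML are hypothetical, not an established GEO metric:

```python
from html.parser import HTMLParser

class StructureAudit(HTMLParser):
    """Counts parse-friendly elements: headings, list items, and
    outbound links (a rough proxy for source attribution)."""
    def __init__(self):
        super().__init__()
        self.headings = 0
        self.list_items = 0
        self.links = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3", "h4"):
            self.headings += 1
        elif tag == "li":
            self.list_items += 1
        elif tag == "a" and any(k == "href" for k, _ in attrs):
            self.links += 1

# Hypothetical page fragment for illustration
page = """
<h1>Product overview</h1>
<h2>Key statistics</h2>
<ul><li>40% visibility lift (<a href="https://example.com/study">source</a>)</li></ul>
"""

audit = StructureAudit()
audit.feed(page)
print(audit.headings, audit.list_items, audit.links)  # 2 1 1
```

Running a script like this across your most important pages quickly flags wall-of-text pages with no headings, lists, or cited sources.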
Test changes systematically. Optimize a small set of pages and track how AI representation changes over the following weeks. The feedback loop is fast enough to iterate quickly and identify what works for your specific content and industry.
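The tracking loop can be as simple as re-running the same prompts each week and logging whether the answers mention your brand. This is an illustrative sketch with fabricated log data and a hypothetical brand name, not output from any real AI system:

```python
from collections import defaultdict

# Hypothetical log: (week, AI answer text) pairs collected by
# re-running the same prompts weekly. Data is made up for illustration.
log = [
    (1, "Top tools include AcmeCo and two competitors."),
    (1, "There are several options in this category."),
    (2, "AcmeCo is a leading choice for this use case."),
    (2, "AcmeCo and BetaCorp both offer this feature."),
]

def citation_rate(log, brand):
    """Share of recorded answers per week that mention the brand."""
    totals, hits = defaultdict(int), defaultdict(int)
    for week, answer in log:
        totals[week] += 1
        if brand.lower() in answer.lower():
            hits[week] += 1
    return {week: hits[week] / totals[week] for week in sorted(totals)}

print(citation_rate(log, "AcmeCo"))  # {1: 0.5, 2: 1.0}
```

A mention-rate trend like this won't capture citation quality on its own, but it gives you a baseline to compare before and after each optimization pass.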
The 10x citation gains are possible, but they're most valuable when they represent accurate, beneficial AI representation of your brand. Quality citations that position your brand correctly are worth more than volume citations that muddy your message.
SeenByAI finds where competitors are beating you in AI, and gives you a prioritized plan to close the gap. Get started free.
Written by Stu Miller, Founder of SeenByAI and CEO & Co-founder of Smart Insights. Stu has spent 16 years helping businesses grow their digital marketing, and built SeenByAI after experiencing the AI visibility problem first-hand running his own Shopify store.