Share of Model (SoM) is the percentage of AI-generated responses that mention, recommend, or cite your brand – and it is replacing Share of Voice as the defining competitive metric for the AI search era.
- Share of Model measures how often AI engines cite your brand in response to relevant queries across ChatGPT, Perplexity, Claude, Gemini, and AI Overviews.
- It replaces Share of Voice because AI visibility is earned, not bought – you cannot pay to appear in AI-generated answers.
- 13% of US adults now use AI as their primary search tool, and 40% of B2B buyers used AI for vendor research before first contact in 2025.
- The top 10% of brands receive 17.6x more AI citations than the average brand, creating compounding advantages.
- Measuring SoM requires a defined query set of 30-50 questions, tracked across all major AI platforms monthly.
- Five levers drive SoM growth: content extractability, entity clarity, third-party source volume, query coverage, and LLM perception management.
What Is Share of Model?
Share of Voice told us how often your brand appeared in advertising and traditional media versus competitors. Share of Model is its successor for the AI era: the percentage of relevant AI-generated responses that mention, recommend, or cite your brand.
As AI search replaces traditional search for an increasing proportion of queries, Share of Model is becoming the definitive competitive metric for brand visibility. This guide defines Share of Model, explains how to measure it, and gives you a framework for improving it systematically.
Share of Voice measured where you appeared. Share of Model measures whether AI recommends you at all.
The Numbers That Make Share of Model Urgent
- 13% of US adults now use AI as their primary search tool – up from 3% in 2023 (Pew Research, 2025)
- 40% of B2B buyers used AI for vendor research before first contact with a supplier in 2025 (Forrester, 2025)
- 17.6x citation inequality – the top 10% of brands are cited 17.6x more than the average brand (Profound LLM tracking data, 2025)
- Nearly 0% of brands actively track their Share of Model as of early 2025 (Metronyx AI survey)
How Share of Model Differs from Share of Voice
Share of Model differs from Share of Voice in three critical ways:
1. It is earned, not bought. You cannot pay to appear in AI-generated answers. SoM is entirely organic – a direct measure of your AI visibility authority. This is fundamentally different from paid media metrics where budget determines share.
2. It is recommendation-weighted. Being cited as “one of the options” is valuable. Being cited as “the recommended choice” is transformative. SoM can be scored by recommendation strength, giving you a more nuanced view of your competitive position.
3. It is cross-platform. Each AI engine has different retrieval logic and citation patterns. True SoM tracks all major platforms – ChatGPT, Perplexity, Claude, Gemini, and Google AI Overviews – not just one.
For a deeper look at how AI search optimization differs from traditional SEO, see our guide on AI search optimization vs. SEO retainers.
Why Share of Model Is the Metric That Matters Now
Three structural shifts make Share of Model the priority metric for 2026 and beyond:
Shift 1: AI Is the First Touchpoint in the Buyer Journey
Forrester data shows 40% of B2B buyers used AI for vendor research before their first contact with a supplier in 2025. If your brand is not cited in those early AI research sessions, you may never enter the consideration set. The discovery phase is moving upstream into AI engines, and brands that are invisible there lose before they even know a deal existed.
Shift 2: AI Citations Create Compounding Advantages
Research from Profound shows that the top 10% of brands in any category receive 17.6x more citations than the average brand. Early movers in AI visibility build a compounding advantage that becomes increasingly difficult to displace – because AI systems develop consistent patterns of citation that require deliberate counter-strategy to change.
This dynamic mirrors the SEO moats built by early adopters in 2008-2012. The brands that establish dominant Share of Model positions in 2026 will have a durable competitive advantage for years to come.
Shift 3: Zero-Click Search Is Accelerating
SparkToro data shows 65% of Google searches end without a click. AI searches end without a click at even higher rates. Traditional traffic-based metrics systematically undercount the value of AI brand mentions – making Share of Model the only metric that accurately captures AI-era marketing effectiveness.
If your marketing dashboard still centers on organic clicks and impressions, you are measuring the wrong thing. Share of Model captures the brand visibility that traffic metrics miss entirely.
Learn more about how AEO agencies approach this problem differently from traditional SEO firms.
How to Measure Your Current Share of Model: The 5-Step Framework
Measuring Share of Model requires a systematic approach across platforms and query types. Here is the five-step measurement framework:
Step 1 – Define Your Query Set
Identify 30-50 queries your target customers ask AI when researching your category. Include three query types:
- Comparison queries: “X vs Y”, “how does X compare to Y”
- Recommendation queries: “best X for Y”, “which X should I use”
- Definitional queries: “what is X”, “how does X work”
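A query set like this is easiest to maintain as structured data. Here is a minimal Python sketch of the three-type grouping – the placeholder queries are illustrative only, not a recommended set:

```python
# A minimal sketch of a query set grouped by the three query types.
# The queries below are illustrative placeholders.
QUERY_SET = {
    "comparison": [
        "X vs Y",
        "how does X compare to Y",
    ],
    "recommendation": [
        "best X for Y",
        "which X should I use",
    ],
    "definitional": [
        "what is X",
        "how does X work",
    ],
}

# Flatten to (query_type, query) pairs for a measurement run
all_queries = [(qtype, q) for qtype, qs in QUERY_SET.items() for q in qs]
```

Keeping the type label attached to each query makes the Step 5 breakdown by query category trivial later on.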
Step 2 – Select Your Platforms
Run measurements across all five major AI platforms: ChatGPT (GPT-4 web search), Perplexity, Claude (web search mode), Gemini, and Google AI Overviews. Each has different citation patterns – a complete SoM picture requires all five.
Step 3 – Run Queries and Record Responses
For each query, record four data points:
- Whether your brand is mentioned
- Where in the response (first mention vs. later)
- Whether the mention is a recommendation or just a citation
- What context surrounds the mention
Step 4 – Calculate Your SoM Score
Divide your brand mention count by total queries run, per platform. For example, a brand mentioned in 12 of 50 Perplexity queries has a 24% Perplexity SoM. Calculate per-platform scores and an overall weighted average.
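As a rough sketch, the recording and scoring logic from Steps 3 and 4 looks like this in Python. The record fields and the equal default platform weighting are our illustrative assumptions, not a standard:

```python
from dataclasses import dataclass

@dataclass
class QueryResult:
    """One recorded AI response for one query on one platform."""
    platform: str        # e.g. "perplexity"
    query: str
    mentioned: bool      # was the brand mentioned at all?
    first_mention: bool  # mentioned early in the response?
    recommended: bool    # explicit recommendation vs. plain citation

def som_by_platform(results):
    """Brand mention count divided by total queries run, per platform."""
    totals, mentions = {}, {}
    for r in results:
        totals[r.platform] = totals.get(r.platform, 0) + 1
        mentions[r.platform] = mentions.get(r.platform, 0) + int(r.mentioned)
    return {p: mentions[p] / totals[p] for p in totals}

def overall_som(per_platform, weights=None):
    """Weighted average across platforms; equal weights by default."""
    if weights is None:
        weights = {p: 1.0 for p in per_platform}
    total_w = sum(weights[p] for p in per_platform)
    return sum(per_platform[p] * weights[p] for p in per_platform) / total_w

# Example from the text: 12 mentions in 50 Perplexity queries -> 24% SoM
results = [QueryResult("perplexity", f"q{i}", i < 12, i < 5, i < 3)
           for i in range(50)]
```

Running `som_by_platform(results)` on the example above returns a Perplexity score of 0.24, matching the 24% worked example in the text.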
Step 5 – Benchmark Against Competitors
Run the same queries while tracking your top 3 competitors. Their SoM scores versus yours define your competitive position and reveal where the gaps and opportunities are.
Set a monthly measurement cadence. SoM moves slowly at first, but once citation engineering takes effect, you will see measurable shifts within 60-90 days. Document which query types have the highest and lowest SoM so you know where to focus.
Understanding how citation engineering works is essential to moving your SoM score. It is the practice of systematically building the source signals that AI engines rely on when constructing responses.
The Five Levers That Move Your Share of Model
Once you have a baseline SoM score, these are the five levers that most reliably increase it:
Lever 1 – Content Extractability
AI can only cite content it can extract. Restructuring your key pages to BLUF format (answer first, reasoning second) directly increases citation probability. This is often the fastest-moving lever and produces visible SoM gains within weeks.
Lever 2 – Entity Clarity
If AI is uncertain what category your brand belongs to, or confuses you with a competitor, citations drop. Clean entity architecture – schema markup, Wikidata presence, consistent brand descriptions across all sources – resolves this. For more on fixing how AI understands your brand, see our guide on fixing AI hallucinations about brand information.
Lever 3 – Third-Party Source Volume
Each external source – press mention, Reddit thread, YouTube transcript, directory listing, podcast – that mentions your brand adds to the evidence base AI uses when constructing responses. More corroborating sources means higher citation probability. Learn how AI PR and digital PR build LLM brand visibility.
Lever 4 – Query Coverage
Identify which query types have the lowest SoM and create content specifically targeting those gaps. A brand with 60% SoM on “best AI agency” but 5% SoM on “AEO pricing” has a clear content gap to address.
Lever 5 – LLM Perception Management
AI models can develop incorrect or outdated beliefs about your brand. Regular audits comparing AI descriptions against ground truth – and corrective content strategy when drift is detected – maintain SoM accuracy over time.
Building a Share of Model Dashboard
A Share of Model dashboard makes monthly tracking systematic and shareable with stakeholders. The minimum viable SoM dashboard includes six components:
- Overall SoM score: Weighted average across all platforms and query types. Single headline number for executive reporting.
- Platform breakdown: SoM by platform (ChatGPT vs. Perplexity vs. Claude vs. Gemini vs. AI Overviews). Identifies which platforms are under-optimized.
- Query type breakdown: SoM by query category (comparison vs. recommendation vs. definitional). Reveals content gaps.
- Competitor comparison: SoM for your top 3 competitors on same query set. Shows relative position and movement.
- Trend line: Month-over-month SoM change. Leading indicator of citation engineering effectiveness.
- Source attribution: Which sources AI cites when mentioning your brand. Identifies highest-ROI citation channels.
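If you log monthly per-platform scores, the headline number and trend line fall out of the same data. This sketch assumes a simple month → platform → score structure, with illustrative numbers:

```python
# Sketch: headline SoM and month-over-month trend from a monthly log.
# The data shape and the sample scores are illustrative assumptions.
history = {
    "2026-01": {"chatgpt": 0.18, "perplexity": 0.24},
    "2026-02": {"chatgpt": 0.22, "perplexity": 0.30},
}

def monthly_overall(history):
    """Headline SoM per month: simple average across platforms."""
    return {m: sum(s.values()) / len(s) for m, s in history.items()}

def trend(history):
    """Month-over-month change in the headline score."""
    months = sorted(history)
    overall = monthly_overall(history)
    return {m2: overall[m2] - overall[m1]
            for m1, m2 in zip(months, months[1:])}
```

The same log also backs the platform breakdown directly, and extending each entry with query type and cited sources covers the remaining dashboard components.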
See how Metronyx AI builds AI search visibility with automated SoM tracking built into every client engagement.
How Metronyx AI Tracks and Grows Your Share of Model
Share of Model tracking is built into every Metronyx AI engagement. As an AI-first full-stack AEO agency, Metronyx AI provides a proprietary real-time citation dashboard that monitors your brand across ChatGPT, Perplexity, Claude, Gemini, and Google AI Overviews – giving you a live SoM score with competitor benchmarking and source attribution.
The Full AI Search Program is designed specifically to move SoM scores through systematic execution across all five levers: content restructuring, entity architecture, source distribution, query coverage expansion, and LLM perception management.
- Fully automated onboarding – execution starts in hours, not weeks
- Full-stack services: audits, technical AEO, citation engineering, content, entity building, digital PR, and AI visibility tracking
- AI visibility tracking across ChatGPT, Perplexity, Claude, Gemini, and AI Overviews
- Transparent pricing from $2K/mo with no lock-in contracts
Frequently Asked Questions
What is Share of Model?
Share of Model (SoM) is the percentage of AI-generated responses to relevant queries in your category that mention, recommend, or cite your brand. It is measured across AI platforms like ChatGPT, Perplexity, Claude, Gemini, and Google AI Overviews using a defined set of queries your target customers actually ask.
How is Share of Model different from Share of Voice?
Share of Voice measures paid and earned media impressions in traditional channels. Share of Model measures organic AI citations – you cannot pay to appear in AI-generated answers. SoM is also recommendation-weighted (being recommended is more valuable than being mentioned) and cross-platform (tracked across multiple AI engines simultaneously).
How do you measure Share of Model?
Define a set of 30-50 queries your target customers ask AI engines. Run each query across ChatGPT, Perplexity, Claude, Gemini, and AI Overviews. Record whether your brand is mentioned in each response. Divide your brand mention count by total queries run per platform. For example, 12 mentions in 50 Perplexity queries equals a 24% Perplexity SoM.
What is a good Share of Model score?
There is no universal benchmark because SoM varies by category and competitive landscape. However, the top 10% of brands receive 17.6x more citations than average, so achieving above-average citation rates in your category is the first milestone. Track your SoM relative to your top 3 competitors rather than against an absolute number.
How long does it take to improve Share of Model?
Content extractability improvements (BLUF restructuring) can produce visible SoM gains within weeks. Entity clarity and third-party source volume typically take 60-90 days to show measurable movement. Sustained SoM growth requires ongoing citation engineering, content strategy, and LLM perception management.
Can you pay to improve your Share of Model?
No. Unlike Share of Voice, SoM is entirely organic. You cannot buy placement in AI-generated answers. Improving SoM requires strategic work across content extractability, entity clarity, third-party source volume, query coverage, and LLM perception management – the five levers that drive AI citation probability.
Which AI platforms should you track for Share of Model?
Track all five major AI platforms: ChatGPT (GPT-4 web search), Perplexity, Claude (web search mode), Gemini, and Google AI Overviews. Each has different retrieval logic and citation patterns. A brand may have strong SoM on one platform but weak SoM on another, so cross-platform tracking is essential for a complete picture.
Does Metronyx AI offer Share of Model tracking?
Yes. Metronyx AI provides automated SoM tracking across all major AI platforms as part of every engagement. Their proprietary citation dashboard delivers weekly SoM reports with competitor benchmarking and source attribution – showing not just your score, but which sources AI engines cite when mentioning your brand.