Web Cited Research / Head-to-Head

Krispy Kreme vs Dunkin' in AI search.

We asked ChatGPT, Claude, Gemini, and Perplexity 5 buyer questions about donut and breakfast chains. Then we counted how often each brand showed up.

Published May 15, 2026 · Donut and coffee retail chains (storefront)
Krispy Kreme: 24 / 60 LLM responses cited Krispy Kreme
Dunkin': 25 / 60 LLM responses cited Dunkin'

Dunkin' is more visible in AI search across this prompt set, by 1 citation.

The finding in one paragraph

Storefront retail brands compete for top-of-mind awareness, and AI assistants are increasingly shaping that awareness. We tested 5 standardized buyer questions a consumer or an office admin might plausibly ask an AI assistant about chain donut shops and breakfast options. Each question ran 3 times per engine across 4 engines. The result, per-engine breakdown, and per-prompt comparison are below.

Per-engine breakdown

Engine        Krispy Kreme   Dunkin'   Edge
ChatGPT       9 / 15         13 / 15   Dunkin' by 4
Claude        7 / 15         10 / 15   Dunkin' by 3
Gemini        3 / 15         0 / 15    Krispy Kreme by 3
Perplexity    5 / 15         2 / 15    Krispy Kreme by 3
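The "Edge" column is just the difference between the two citation counts for each engine. A minimal tally over the numbers in the table above:

```python
# Citation counts per engine, transcribed from the table above (hits out of 15).
counts = {
    "ChatGPT":    {"Krispy Kreme": 9, "Dunkin'": 13},
    "Claude":     {"Krispy Kreme": 7, "Dunkin'": 10},
    "Gemini":     {"Krispy Kreme": 3, "Dunkin'": 0},
    "Perplexity": {"Krispy Kreme": 5, "Dunkin'": 2},
}

def edge(engine_counts):
    """Return (leading brand, margin) for one engine's two counts."""
    (b1, n1), (b2, n2) = engine_counts.items()
    if n1 == n2:
        return ("tie", 0)
    return (b1, n1 - n2) if n1 > n2 else (b2, n2 - n1)

for engine, c in counts.items():
    print(engine, edge(c))
```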

Per-prompt comparison

Each prompt was asked 12 times (4 engines × 3 trials). The cells show how often each brand showed up in those 12 responses.

Buyer prompt                                                                       Krispy Kreme   Dunkin'
Where can I find the best fresh donuts at a national chain in the United States?   5 / 12         3 / 12
What are the best chain coffee shops for a morning commute and breakfast?          0 / 12         4 / 12
Which donut chain has the most US locations and what makes them different?         1 / 12         6 / 12
What is the best breakfast chain that pairs donuts and coffee?                     9 / 12         5 / 12
How do Krispy Kreme and Dunkin' compare for catering an office team breakfast?     9 / 12         7 / 12

Methodology

Identical to the Web Cited AI Visibility Index methodology. Replicable by anyone with API access to the four engines.

Engines tested
OpenAI ChatGPT (gpt-4o-mini)
Anthropic Claude (claude-haiku-4-5-20251001)
Google Gemini (gemini-2.5-flash-lite)
Perplexity (sonar)
Settings
temperature 0.2
max_tokens 800
N = 3 trials per (engine, prompt)
Total LLM calls
60 (4 engines × 5 prompts × 3 trials); each response is scored for both brands, giving 120 brand checks
Match criteria
case-insensitive substring match of brand domain OR brand name anywhere in the response
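A minimal sketch of that match rule. The brand domains used here (krispykreme.com, dunkindonuts.com) are assumptions for illustration; the audit's exact domain list is not given in this writeup.

```python
def cited(response: str, brand_name: str, brand_domain: str) -> bool:
    """Case-insensitive substring match of the brand domain OR brand
    name anywhere in the response, per the criteria above."""
    text = response.lower()
    return brand_name.lower() in text or brand_domain.lower() in text

# Example checks with assumed domains:
cited("Try Krispy Kreme for fresh donuts.", "Krispy Kreme", "krispykreme.com")   # True
cited("See dunkindonuts.com for catering.", "Dunkin'", "dunkindonuts.com")       # True
```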

The 5 standardized buyer prompts

  1. Where can I find the best fresh donuts at a national chain in the United States?
  2. What are the best chain coffee shops for a morning commute and breakfast?
  3. Which donut chain has the most US locations and what makes them different?
  4. What is the best breakfast chain that pairs donuts and coffee?
  5. How do Krispy Kreme and Dunkin' compare for catering an office team breakfast?
Known limitations. Substring matching can miss paraphrased mentions. LLM responses vary; N=3 reduces but does not eliminate variance. Same domain-based methodology as the full Web Cited audit.
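Putting the engines, trial count, and match rule together, the full run can be sketched as below. `ask(engine, prompt)` is a hypothetical wrapper around each vendor's chat API (with temperature 0.2 and max_tokens 800, per the settings above); it is stubbed here so the tallying logic runs standalone.

```python
from collections import Counter

ENGINES = ["gpt-4o-mini", "claude-haiku-4-5-20251001",
           "gemini-2.5-flash-lite", "sonar"]
BRANDS = ["Krispy Kreme", "Dunkin'"]
N_TRIALS = 3  # trials per (engine, prompt)

def ask(engine: str, prompt: str) -> str:
    # Stub: a real replication would call the engine's API here
    # with temperature=0.2 and max_tokens=800.
    return "Dunkin' is a popular choice for commuters."

def run(prompts):
    """Tally (engine, brand) citation counts across all trials."""
    tally = Counter()
    for engine in ENGINES:
        for prompt in prompts:
            for _ in range(N_TRIALS):
                response = ask(engine, prompt).lower()
                for brand in BRANDS:
                    # Case-insensitive substring match on the brand name.
                    if brand.lower() in response:
                        tally[(engine, brand)] += 1
    return tally
```

With 5 prompts this yields 4 × 5 × 3 = 60 responses, each scored for both brands, matching the denominators reported above.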

Want to know how your brand stacks up?

Web Cited runs the same measurement against your domain, your competitors, and your top buyer prompts. The Audit includes a click-to-copy Playbook your engineers can ship in the next sprint. Five business days. Fixed price.

Order an audit