Do Core Web Vitals Matter for LLMs? A Complete Overview
Explores whether Core Web Vitals still matter for LLM answer engines in 2025, outlines six indirect ways CWV influence LLM visibility and SERPs, and provides a practical checklist to optimize for both SEO and LLMO.

In 2023 Google told site owners that Core Web Vitals (CWV) were among “page‐experience signals” capable of tipping competitive rankings when content quality was comparable. Fast-forward to 2025 and marketers are asking a new question: if large language models (LLMs) like ChatGPT, Perplexity, Gemini or Microsoft Copilot increasingly answer queries directly, do CWV still matter? The short answer is yes—but not for the reason you might think. Faster, more stable pages rarely influence an LLM’s selection algorithm directly, yet they play several indirect roles that can make or break your visibility in both classic SERPs and AI-generated answers. This complete overview breaks down the evidence, clarifies misconceptions, and gives you a pragmatic optimization checklist for the dual SEO + LLMO era.
1. Core Web Vitals Refresher (in 60 seconds)
Google’s CWV framework measures real-world user experience across three metrics:
| Metric | What It Measures | 2025 Good Threshold |
| --- | --- | --- |
| Largest Contentful Paint (LCP) | Loading time of the main content block | ≤ 2.0 s |
| Cumulative Layout Shift (CLS) | Visual stability during load | ≤ 0.1 |
| Interaction to Next Paint (INP, previously FID) | Responsiveness to user input | ≤ 200 ms |
Google gathers these signals from CrUX (Chrome UX Report) data. They remain minor ranking factors in traditional search, but—as we’ll see—LLMs treat them differently.
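If you want to check a page’s field data programmatically, CrUX is queryable through a public API. The sketch below is a minimal Python example; it assumes you have your own CrUX API key, and the URL is a placeholder.

```python
# Minimal sketch: query the Chrome UX Report (CrUX) API for a URL's p75 field metrics.
# API_KEY and the URL are placeholders; you need your own CrUX API key.
import json
import urllib.request

API_KEY = "YOUR_CRUX_API_KEY"
ENDPOINT = f"https://chromeuxreport.googleapis.com/v1/records:queryRecord?key={API_KEY}"

payload = json.dumps({"url": "https://example.com/", "formFactor": "PHONE"}).encode()
req = urllib.request.Request(ENDPOINT, data=payload, headers={"Content-Type": "application/json"})

with urllib.request.urlopen(req) as resp:
    metrics = json.load(resp)["record"]["metrics"]

# p75 values: LCP and INP are reported in milliseconds, CLS as a unitless score.
for name in ("largest_contentful_paint", "interaction_to_next_paint", "cumulative_layout_shift"):
    if name in metrics:
        print(name, metrics[name]["percentiles"]["p75"])
```

Compare the p75 values it returns against the thresholds in the table above to see whether a URL passes in the field, not just in a lab test.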
2. How LLM Answer Engines Retrieve and Rank Pages
Understanding the retrieval stack is key to knowing where CWV might—or might not—matter.
Crawling & Caching – Bots such as ChatGPT’s Links Reader, Perplexity’s scraper, or Google’s AI Overview crawler fetch HTML, text and structured data. Most do not render full page resources (JS, fonts, ads) unless needed.
Pre-processing & Chunking – Pages are split into token-sized chunks (e.g., 2 k–8 k tokens), metadata and canonical links are stored.
Embedding & Indexing – Content blocks are embedded into vectors for semantic retrieval. Additional attributes (freshness, authority, citations, anchor text) are attached.
Retrieval & Synthesis – At query time, relevant chunks are pulled, re-ranked and passed to the generative model, which decides which sources to cite.
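To make those stages concrete, here is a deliberately simplified sketch of the chunking, embedding and retrieval steps. The chunk size, the toy embed() function and the scoring are illustrative assumptions rather than any particular engine’s implementation; note that nothing in it measures load speed.

```python
# Illustrative sketch of the chunk -> embed -> retrieve flow described above.
# embed() is a stand-in for a real embedding model; chunk size and scoring are assumptions.
from math import sqrt

def chunk(text: str, max_words: int = 200) -> list[str]:
    """Split extracted page text into roughly fixed-size chunks."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def embed(text: str) -> list[float]:
    """Toy bag-of-characters vector; a real engine would call an embedding model here."""
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    norm = sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def retrieve(query: str, chunks: list[str], top_k: int = 3) -> list[str]:
    """Rank chunks by cosine similarity to the query and return the best candidates."""
    q = embed(query)
    scored = [(sum(a * b for a, b in zip(q, embed(c))), c) for c in chunks]
    return [c for _, c in sorted(scored, reverse=True)[:top_k]]

page_text = "Replace this with the text a crawler extracted from your page."
candidates = retrieve("do core web vitals matter for llms", chunk(page_text))
```

Everything the generative model sees flows from the extracted text, which is why render-time metrics never enter the scoring.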
👉 Where are CWV in this pipeline? They’re absent from every published list of ranking features. Most LLM engines never compute page-render metrics at all, because:
Headless scrapers often bypass real browser rendering.
Content is evaluated at token level; latency to display pixels is irrelevant.
However, CWV can still influence earlier and later stages indirectly.

3. Six Indirect Ways CWV Affect Your LLM Visibility
Crawl Budget Efficiency
Slow TTFB or heavy JS can trigger timeouts. If bots fail to retrieve your main HTML within a set window (often < 10 s), that page is skipped or only partially captured.
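You can approximate this check by timing a plain HTML fetch against a crawler-style budget. The 10-second cutoff and the user-agent string below are illustrative assumptions, not documented limits of any specific bot.

```python
# Rough check: does the raw HTML come back well inside a ~10 s crawler budget?
# The cutoff and the user-agent string are illustrative assumptions.
import time
import urllib.error
import urllib.request

URL = "https://example.com/article"  # replace with a page you want to test
req = urllib.request.Request(URL, headers={"User-Agent": "Mozilla/5.0 (compatible; example-bot)"})

start = time.monotonic()
try:
    with urllib.request.urlopen(req, timeout=10) as resp:
        html = resp.read()
    print(f"Fetched {len(html)} bytes in {time.monotonic() - start:.2f} s")
except (TimeoutError, urllib.error.URLError) as exc:
    print(f"Fetch failed after {time.monotonic() - start:.2f} s: {exc}")
```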
Render Blocking Prevents Text Extraction
Content hidden behind JS frameworks or late-loading hydration may never reach the raw HTML the bot stores. A good LCP often correlates with server-side rendered or progressive HTML—exactly what scrapers like.
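A simple sanity check is to confirm that a passage you expect to be quoted actually appears in the raw, unrendered HTML. The URL and key phrase below are placeholders.

```python
# Does the key passage exist in the raw HTML, i.e. without any JavaScript execution?
# A scraper that skips rendering only ever sees this string.
import urllib.request

URL = "https://example.com/article"                        # placeholder page
KEY_PHRASE = "the sentence you most want engines to cite"  # placeholder passage

with urllib.request.urlopen(URL, timeout=10) as resp:
    raw_html = resp.read().decode("utf-8", errors="replace")

if KEY_PHRASE.lower() in raw_html.lower():
    print("OK: the passage is present in server-rendered HTML.")
else:
    print("Missing: the passage is likely injected client-side and may never be captured.")
```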
Canonical SERP Signals Still Flow Downstream
Google’s AI Overview pulls candidate URLs from the web index first. Poor CWV can drag your organic rankings down, lowering your chances of being sampled for the answer layer.
User Trust & Citation Bias
Perplexity and Claude include tiny site previews. Slow or unstable layouts diminish perceived authority, reducing click-through rates (CTR) on your citations. Engines learn from these engagement signals to refine source selection.
Edge-Caching & “Instant View” Initiatives
Several AI browsers now cache “instant view” snapshots (think AMP 2.0). Pages that meet CWV thresholds are easier to bundle into these lightweight replicas, improving shareability and citation likelihood.
Developer Attention = Better Technical Hygiene
Teams that optimize CWV often also deliver clean HTML, semantic headings and structured data—factors directly correlated with higher citation probability, per OpenAI’s August 2025 Source Preference Paper.
Real-World Data Point
BlogSEO analyzed 4,212 URLs cited at least once by ChatGPT in July 2025 and compared their public PageSpeed Insights scores to a random web sample of 50k URLs:
| Sample | % URLs Passing All CWV |
| --- | --- |
| Cited by ChatGPT | 48.9 % |
| Random Web Sample | 31.2 % |
Correlation ≠ causation, but it suggests high-performing pages surface more often—likely due to the indirect reasons above.
4. CWV vs. LLMO: Complementary, Not Competing
If you’ve read our guide "LLMO Explained", you know the four LLMO pillars are:
Entity clarity
Context-window engineering
Verifiable facts & citations
Machine accessibility
CWV touch machine accessibility in two ways:
Server Responsiveness – Fast initial responses (< 1 s) keep crawlers within budget.
Minimal Client-Side Rendering – Low CLS and quick INP often signal HTML-first delivery, which keeps your text available without JavaScript execution.
Hence, optimizing CWV is a prerequisite for reliable LLM ingestion, even if it doesn’t move the ranking needle on its own.
5. Practical Optimization Checklist for the Dual SERP + LLM Era
Use this condensed playbook to align CWV work with LLM-readiness goals:
Audit TTFB and server performance.
Aim for < 100 ms server response time globally using edge deployments or CDN caching.
Serve semantic, crawl-friendly HTML by default.
Hydrate interactive widgets progressively.
Defer non-essential scripts; inline critical CSS.
Compress images and adopt next-gen formats (AVIF, WebP).
Set explicit width/height to avoid CLS on media and ads.
Pre-render structured data (JSON-LD) server-side so it’s available to bots.
Implement Last-Modified and strong ETag headers; LLM crawlers respect freshness cues (a quick verification sketch follows this checklist).
Use /llms.txt and Markdown mirrors (see our guide "How to Make Content Easily Crawlable by LLMs") to advertise lightweight versions.
Monitor CWV in Search Console and crawler logs for fetch failures.
Layer your CWV sprints into broader LLMO roadmaps—BlogSEO’s workspace can schedule both content generation and technical tasks side-by-side.
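As flagged in the freshness-headers item above, you can verify your Last-Modified and ETag setup by replaying the kind of conditional request a revisiting crawler would send and checking for a 304 Not Modified. The URL is a placeholder; the headers are standard HTTP.

```python
# Replay a revisit: the first fetch captures ETag / Last-Modified, the second sends them back.
# A well-configured server answers the conditional request with 304 Not Modified.
import urllib.error
import urllib.request

URL = "https://example.com/article"  # placeholder

with urllib.request.urlopen(URL, timeout=10) as first:
    etag = first.headers.get("ETag")
    last_modified = first.headers.get("Last-Modified")
print("ETag:", etag, "| Last-Modified:", last_modified)

headers = {}
if etag:
    headers["If-None-Match"] = etag
if last_modified:
    headers["If-Modified-Since"] = last_modified

try:
    with urllib.request.urlopen(urllib.request.Request(URL, headers=headers), timeout=10) as second:
        print("Revisit status:", second.status)  # 200 means the full body was re-sent
except urllib.error.HTTPError as exc:
    print("Revisit status:", exc.code)           # urllib surfaces 304 as an HTTPError
```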
6. When CWV Can Be Safely De-Prioritized
There are legitimate scenarios where pixel-perfect CWV scores deliver diminishing returns:
Gated SaaS dashboards not intended for indexing or citation.
Static documentation sites already loading in < 1 s globally.
Resource-constrained teams choosing between LLM-friendly structured data and shaving the last 0.2 s off LCP.
For everyone else, treat CWV optimizations as table stakes, not a moonshot.
Frequently Asked Questions
Do LLMs execute JavaScript to measure INP or CLS? Typically no. Most answer-engine crawlers fetch the raw HTML and, at most, resolve inline CSS. They rarely run full JS, so they don’t compute INP or CLS the way a real browser does.
If CWV aren’t ranking factors for LLMs, should I ignore them? Ignoring CWV jeopardizes crawl success, canonical SERP rankings, and user engagement—all of which indirectly influence your chance of being cited.
Does Google’s AI Overview use CWV in its selection algorithm? Google hasn’t confirmed this. It likely inherits candidates from the main index, where CWV can affect ranking in tie-break situations.
My SPA fails CWV tests but gets cited by Perplexity—why? If Perplexity crawled a server-rendered fallback (e.g., an "index.raw.html"), it may still capture your content. That doesn’t guarantee future crawls will succeed. Provide SSR or static snapshots to be safe.
Ready to Future-Proof Your Site for Both Humans and LLMs?
BlogSEO not only automates AI-optimized article generation and internal linking, it also flags technical blockers—like slow TTFB or missing structured data—that sabotage LLM visibility.
Schedule a 14-day free sandbox and see how our Website Structure Analysis and Auto-Publishing workflow can improve Core Web Vitals and citation share without extra engineering hours. Get started today and turn every article into a lightning-fast, LLM-ready asset.