Clarigital · Clarity in Digital Marketing
Technical SEO · Session 2, Guide 7

JavaScript SEO · How Google Crawls and Indexes JavaScript

JavaScript-heavy sites present unique crawling and indexing challenges. Googlebot renders JavaScript but does so in a second wave that can delay indexing by days or weeks. This guide explains Googlebot's rendering pipeline, common JavaScript SEO problems, dynamic rendering as a solution, and how to diagnose JS crawlability with Search Console and Chrome DevTools.

Technical SEO · 2,900 words · Updated Apr 2026

What You Will Learn

  • Googlebot's two-wave rendering pipeline and why it causes indexing delays
  • Why client-side rendered SPAs (React, Vue, Angular) are particularly problematic for SEO
  • How dynamic rendering works and why Google treats it as a temporary workaround
  • Server-side rendering as the permanent solution for JS SEO problems
  • How to test what Googlebot actually sees using URL Inspection in Search Console
  • Common JavaScript patterns that block crawling and how to fix them
  • How to verify structured data in JavaScript-rendered pages is being indexed

How Googlebot Renders JavaScript

Googlebot crawls web pages using a headless Chromium browser. It can execute JavaScript, fetch resources, and render the DOM — much like a regular user's browser. However, Googlebot's rendering pipeline differs from a real browser in several important ways that affect which content gets indexed.

Googlebot renders pages with an "evergreen" Chromium that Google keeps close to the current stable release, so modern JavaScript syntax (ES2020+) and most Web APIs are supported. The renderer still differs from a real browser in important ways: permission requests are declined, and cookies, localStorage, and sessionStorage are cleared between page loads. JavaScript that depends on these features may fail silently in Googlebot's renderer.

Googlebot does not execute JavaScript that requires user interaction

Content that is only revealed after a user click, hover, scroll, or form submission will not be indexed. Googlebot simulates a page load but does not simulate user interactions. Any content behind a JavaScript event handler that requires user action is invisible to Googlebot unless that content also exists in the initial HTML or in an accessible URL.
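A safer pattern is to put the full content in the initial HTML and let the click handler only toggle visibility. The sketch below illustrates the idea with a hypothetical FAQ widget; the function and data names are illustrative, not a specific library's API:

```javascript
// Sketch: render expandable FAQ content into the initial HTML so it is
// indexable, and let the click handler merely toggle visibility.
// `renderFaq` and its data shape are illustrative names.

function renderFaq(items) {
  // Every answer is present in the markup from the start; the `hidden`
  // attribute hides it visually, but Wave 1 still sees the text.
  return items
    .map(
      ({ question, answer }, i) =>
        `<h3 data-faq-toggle="${i}">${question}</h3>\n` +
        `<div id="faq-${i}" hidden>${answer}</div>`
    )
    .join("\n");
}

const html = renderFaq([
  { question: "Does Googlebot click?", answer: "No, it only loads the page." },
]);

// In the browser, the click handler would only flip the attribute:
//   document.getElementById("faq-0").hidden = false;
```

Because the answer text ships in the server HTML, it is indexable even if the toggle script never runs.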

The Two-Wave Indexing Problem

Google's rendering pipeline for JavaScript operates in two phases, known as the two-wave approach. Wave 1 is fast — Googlebot fetches the HTML and indexes the static content immediately. Wave 2 is deferred — the page is added to a rendering queue where JavaScript is executed and the DOM is evaluated. The content discovered in Wave 2 is only indexed after rendering completes.

The delay between Wave 1 and Wave 2 rendering is variable and unpredictable. Google has acknowledged that rendering is deferred until resources are available, and during periods of high rendering-queue load the wait can stretch from seconds to days or even weeks. This means:

  • New content on a JavaScript-rendered page may take days to appear in Google's index after publication
  • Updates to JS-rendered content (product descriptions, prices, page titles) take longer to propagate to search results
  • Critical SEO elements — title tags, meta descriptions, canonical URLs, hreflang attributes, and structured data — generated by JavaScript may be absent from the index for extended periods

The two-wave problem applies even if Googlebot can render JS

A common misconception is that "Googlebot can execute JavaScript, so JS SEO is not a problem." The two-wave delay exists regardless of whether Googlebot can successfully render your JavaScript — it is a function of rendering queue capacity, not rendering capability. Server-rendered HTML bypasses the queue entirely.

Single-Page Application SEO Problems

Single-page applications (React, Vue, Angular, Svelte with CSR) route page navigation through JavaScript rather than server-side URL changes. Each "page" in a SPA is rendered by JavaScript manipulating the DOM and updating the browser URL via the History API. This architecture creates several specific SEO problems:

  • Blank initial HTML. A typical create-react-app or Vue CLI SPA delivers HTML containing only a <div id="app"></div> placeholder. All content is generated by JavaScript. If Wave 2 rendering fails or is delayed, there is no content to index.
  • Soft navigation title and meta issues. When a user navigates between SPA routes, the URL changes but no full page reload occurs. If the JavaScript router does not update document.title and meta tags correctly on each route change, all "pages" in the SPA may share the same title and description in Google's index.
  • Infinite scroll and paginated content. Content loaded by JavaScript on scroll (infinite scroll pagination) is typically not crawled — Googlebot does not simulate scrolling. Use classic pagination with distinct URLs for paginated content that needs to be indexed.
  • JavaScript-generated canonical and hreflang tags. These must be present in the initial HTML to be reliably indexed. If they are injected by JavaScript after load, they may be missed during Wave 1 and subject to Wave 2 delays.

Dynamic Rendering

Dynamic rendering is a server-side workaround: detect when the incoming request is from a crawler (based on the User-Agent header) and serve pre-rendered static HTML; serve the normal JavaScript SPA to regular users. Crawlers receive fast, fully-rendered HTML with no JavaScript dependency. Users receive the full interactive SPA experience.
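The crawler-detection step can be sketched as a simple User-Agent check. The bot list below is illustrative and deliberately incomplete; production setups typically also verify Googlebot via reverse DNS, since the User-Agent header can be spoofed:

```javascript
// Sketch: User-Agent-based crawler detection for dynamic rendering.
// BOT_PATTERN covers a few well-known crawlers only; extend as needed.

const BOT_PATTERN = /googlebot|bingbot|yandex|baiduspider|duckduckbot|slurp/i;

function isCrawler(userAgent) {
  return BOT_PATTERN.test(userAgent || "");
}

// A request handler would then branch on the result, e.g.:
//   isCrawler(req.headers["user-agent"])
//     ? servePrerenderedHtml(req)   // static HTML from the pre-renderer
//     : serveSpaShell(req);         // normal JavaScript application
```

Serving different markup to crawlers this way is not treated as cloaking by Google, provided the pre-rendered content is equivalent to what users see.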

Google considers dynamic rendering a temporary workaround

Google's official documentation describes dynamic rendering as an interim solution for sites where implementing server-side rendering is not immediately feasible, and now explicitly flags it as deprecated. Google recommends SSR (or static generation) as the permanent approach. Dynamic rendering still works — but it adds infrastructure complexity (a pre-rendering service like Rendertron or Puppeteer) and requires ongoing maintenance as your content changes.

How to implement dynamic rendering

  1. Deploy a headless browser service (Rendertron, Puppeteer-based, or Prerender.io) that pre-renders your SPA URLs on demand and caches the rendered HTML
  2. Configure your reverse proxy (Nginx, CDN edge function) to detect crawler User-Agents and route those requests to the pre-rendering service
  3. Configure cache invalidation so the pre-rendered HTML is refreshed when content changes
  4. Verify the correct HTML is being served to Googlebot using the URL Inspection tool in Search Console
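Steps 1 and 3 above hinge on caching rendered HTML and refreshing it when it goes stale. A minimal sketch with TTL-based invalidation — one simple strategy among many — where `renderFn` stands in for a Rendertron/Puppeteer call supplied by the caller:

```javascript
// Sketch: an in-memory cache in front of a pre-renderer with TTL invalidation.
// `renderFn(url)` is assumed to return a Promise of rendered HTML.

function createRenderCache(renderFn, ttlMs = 60 * 60 * 1000) {
  const cache = new Map(); // url -> { html, renderedAt }

  return async function getRendered(url, now = Date.now()) {
    const hit = cache.get(url);
    if (hit && now - hit.renderedAt < ttlMs) return hit.html; // fresh copy
    const html = await renderFn(url); // expensive headless-browser render
    cache.set(url, { html, renderedAt: now });
    return html;
  };
}
```

In practice the cache would live in Redis or on the CDN rather than in process memory, and invalidation would also be triggered by publish events, not only by TTL expiry.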

Server-Side Rendering — The Permanent Solution

Server-side rendering (SSR) generates the full HTML of each page on the server before sending it to the browser. The browser receives complete, indexable HTML — eliminating the two-wave problem entirely. JavaScript then "hydrates" the HTML to add interactivity. From Google's perspective, SSR pages are indistinguishable from traditional server-rendered HTML pages.

  • React: Next.js (App Router or Pages Router). Industry standard; supports SSR, SSG, ISR, and streaming.
  • Vue: Nuxt.js. Full-featured SSR framework for Vue; supports static generation.
  • Angular: Angular Universal. Official Angular SSR solution; integrated into the Angular CLI.
  • Svelte: SvelteKit. SSR by default; highly performant with minimal JS payload.
  • Any framework: Astro. Islands architecture; ships zero JS by default; excellent for content sites.
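Stripped of framework specifics, the core idea is small: the server assembles the complete HTML — SEO-critical head tags included — before any JavaScript runs in the browser. A framework-agnostic sketch (all names are illustrative):

```javascript
// Sketch of what SSR frameworks do under the hood: produce complete,
// indexable HTML on the server. Hydration then attaches interactivity
// to this markup in the browser.

function renderPage({ title, description, canonical, bodyHtml }) {
  return `<!doctype html>
<html>
<head>
  <title>${title}</title>
  <meta name="description" content="${description}">
  <link rel="canonical" href="${canonical}">
</head>
<body>
  <main id="app">${bodyHtml}</main>
  <!-- the hydration script would attach event handlers to the markup above -->
</body>
</html>`;
}
```

Everything Googlebot needs is in the response body at Wave 1; Wave 2 adds nothing new, so queue delays stop mattering for indexing.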

Testing JavaScript SEO

URL Inspection Tool (Search Console)

The URL Inspection tool shows exactly what Googlebot sees when it renders a specific URL — including the rendered HTML after JavaScript execution. Enter a URL, click "Test Live URL", then "View Tested Page" and switch to the "HTML" tab to see the rendered source. This is the single most reliable way to verify what content Googlebot is actually indexing on a JavaScript-rendered page.

Chrome DevTools — Disable JavaScript

Open DevTools → Settings (F1) → Preferences → Debugger → enable "Disable JavaScript". Reload the page. What you see is approximately what Googlebot's Wave 1 indexes (before JavaScript rendering). If the page is blank or missing key content, those elements depend entirely on JavaScript rendering for indexing.
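The same Wave 1 check can be scripted: audit the raw (pre-JavaScript) HTML for strings that must be indexable immediately. The helper below is illustrative; in practice you would feed it `await (await fetch(url)).text()`:

```javascript
// Sketch: approximate the Wave 1 view by checking the raw HTML response
// for required content. Returns the strings that are NOT present.

function missingFromRawHtml(rawHtml, requiredStrings) {
  return requiredStrings.filter((s) => !rawHtml.includes(s));
}

// A typical client-side-rendered shell, for illustration:
const shell =
  '<html><head><title>App</title></head><body><div id="app"></div></body></html>';

const missing = missingFromRawHtml(shell, [
  "<title>App</title>",
  "Product description",
]);
// For a CSR shell like this, the product copy is absent from Wave 1.
```

Running such a check across a URL sample quickly surfaces pages whose critical content exists only after rendering.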

Crawl tools with JavaScript rendering

Screaming Frog and Sitebulb support JavaScript rendering in their crawlers. Compare a crawl with JavaScript rendering disabled vs enabled to identify URLs where content differs significantly — these are your highest-priority JavaScript SEO issues.

Common JavaScript SEO Fixes

  • Move critical meta tags to the initial HTML. Title, meta description, canonical, hreflang, and Open Graph tags must be in the server-rendered HTML — not injected by JavaScript.
  • Ensure structured data is in the initial HTML. JSON-LD schema can be generated server-side and included in the <head> of the initial HTML response. Do not rely on JavaScript to inject it after load.
  • Avoid hash-based routing. URLs like example.com/page#section are treated as the same URL by Google (the fragment is ignored). Use the HTML5 History API (pushState) for clean URLs instead.
  • Return correct HTTP status codes. A 404 page rendered by JavaScript but served with a 200 HTTP status code is a "soft 404" — Google may index it as valid content. Server-side routing must return the correct status code for each URL.
  • Implement XML sitemap for all JS-rendered URLs. Help Googlebot discover all indexable URLs rather than relying solely on link discovery through the JavaScript application.
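The structured-data fix above can be sketched as a small server-side helper that serializes JSON-LD into the initial `<head>`. The `Product` shape follows schema.org; the helper name and product data are illustrative:

```javascript
// Sketch: generate a JSON-LD script tag on the server so structured data
// does not depend on Wave 2 rendering.

function jsonLdScript(data) {
  // Escape "</" so the payload cannot prematurely close the script tag.
  const json = JSON.stringify(data).replace(/<\//g, "<\\/");
  return `<script type="application/ld+json">${json}</script>`;
}

const tag = jsonLdScript({
  "@context": "https://schema.org",
  "@type": "Product",
  name: "Acme Widget",
  offers: { "@type": "Offer", price: "19.99", priceCurrency: "USD" },
});
// `tag` is embedded in the server-rendered <head>, visible at Wave 1.
```

After deploying, confirm with the Rich Results Test or URL Inspection that the markup is picked up from the initial HTML.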

Authentic Sources

Official · Google Search Central — JavaScript SEO Basics

How Googlebot handles JavaScript, rendering, and indexing JS-rendered content.

Official · Google Search Central — Dynamic Rendering

When and how to implement dynamic rendering for JavaScript-heavy sites.

Official · Google web.dev — Rendering on the Web

SSR, CSR, SSG, hydration — the full rendering pattern taxonomy.
