What You Will Learn
- Brotli vs gzip — compression ratios, browser support, and when to use each
- HTTP/2 multiplexing and how it changes asset loading strategies
- Resource hints — preload, prefetch, preconnect, dns-prefetch — use cases for each
- How to identify and eliminate render-blocking resources in the critical rendering path
- A complete image delivery pipeline: format selection, sizing, lazy loading, CDN
- Cache-Control headers — the complete directive reference
- How to audit and manage third-party script performance impact
Compression: Brotli and Gzip
Text-based resources — HTML, CSS, JavaScript, SVG, JSON — compress very efficiently. Enabling server-side compression reduces transfer sizes by 60–80% for typical web assets, directly reducing resource load duration (Phase 3 of LCP) for text resources.
Brotli (developed by Google) achieves 15–25% better compression ratios than gzip at equivalent speeds, and up to 20–26% better than gzip at maximum compression levels. Supported by all modern browsers (Chrome, Firefox, Safari, Edge). Browsers signal Brotli support via the Accept-Encoding: br request header.
```nginx
# Nginx — enable Brotli (requires ngx_brotli module)
brotli on;
brotli_comp_level 6;
brotli_types text/html text/css application/javascript
             application/json image/svg+xml;
```

```apache
# Apache — enable Brotli (mod_brotli)
AddOutputFilterByType BROTLI_COMPRESS text/html text/css
AddOutputFilterByType BROTLI_COMPRESS application/javascript
```
Most CDNs (Cloudflare, Fastly, AWS CloudFront) support Brotli — enable it in your CDN settings rather than at the origin server level for maximum coverage.
Gzip is universally supported and should be the fallback for clients that do not support Brotli. In practice, fewer than 1% of browsers lack Brotli support, but gzip is still the correct server configuration default to ensure coverage. Serve Brotli when Accept-Encoding: br is present; fall back to gzip otherwise — this is handled automatically by Nginx, Apache, and most CDNs when both are configured.
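The gzip fallback can sit alongside the Brotli directives above — a minimal Nginx sketch, with illustrative directive values; Nginx selects the encoding automatically from the client's Accept-Encoding header:

```nginx
# Gzip fallback for the small share of clients without Brotli support
gzip on;
gzip_comp_level 6;
gzip_vary on;    # emit Vary: Accept-Encoding so caches store both variants
gzip_types text/css application/javascript application/json image/svg+xml;
# Note: text/html is gzipped by default and must not be listed in gzip_types
```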
HTTP/2 and HTTP/3
HTTP/2 fundamentally changed how browsers load multiple resources simultaneously. Under HTTP/1.1, browsers were limited to 6 parallel connections per domain — a primary reason for performance techniques like domain sharding, CSS sprites, and JavaScript concatenation. HTTP/2 multiplexing allows many requests over a single connection, eliminating HTTP-level head-of-line blocking (TCP-level head-of-line blocking remains, and is what HTTP/3 addresses).
HTTP/2 requires HTTPS. Most modern web servers (Nginx 1.9.5+, Apache 2.4.17+) and all major CDNs support HTTP/2 — verify it is enabled. Check using Chrome DevTools Network panel: right-click the column headers and enable "Protocol" — requests served via HTTP/2 show "h2".
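Enabling HTTP/2 at the origin is typically a one-line change — a minimal Nginx sketch, with hypothetical domain and certificate paths (note that Nginx 1.25.1+ prefers a separate `http2 on;` directive over the `listen` parameter shown here):

```nginx
server {
    # HTTP/2 is negotiated over TLS via ALPN, so it is enabled on the TLS socket
    listen 443 ssl http2;
    server_name example.com;                          # hypothetical domain
    ssl_certificate     /etc/ssl/example.com.pem;     # illustrative paths
    ssl_certificate_key /etc/ssl/example.com.key;
}
```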
HTTP/3 (QUIC protocol) offers further improvements — particularly for users on high-latency or lossy connections — because it eliminates TCP head-of-line blocking. Cloudflare, Google Cloud CDN, and AWS CloudFront support HTTP/3. Enable it at the CDN level if available; do not rely on it from origin.
Under HTTP/1.1, serving assets from multiple domains (static1.example.com, static2.example.com) increased parallel connections. Under HTTP/2, this is counterproductive — it creates separate TCP connections with separate congestion windows and eliminates multiplexing benefits. Consolidate assets onto a single domain (or the minimum necessary) when using HTTP/2.
Resource Hints
Resource hints are HTML link elements that tell the browser to perform work (DNS lookups, connections, downloads) in advance of when it would normally discover the need. Used correctly, they eliminate latency from the critical path. Used incorrectly, they waste bandwidth and can hurt performance.
| Hint | What It Does | When to Use | Cost |
|---|---|---|---|
| preload | Fetches a specific resource immediately at high priority | LCP image, critical fonts, above-fold CSS used via @import | Medium — fetches unconditionally; waste if resource not used |
| preconnect | Establishes TCP+TLS connection to a third-party origin in advance | Google Fonts, CDN origins, analytics hosts loaded in <head> | Low — no data transferred, just connection overhead |
| dns-prefetch | Resolves DNS for an origin in advance (no connection) | Third-party origins loaded later in the page (below fold) | Very low — DNS lookup only |
| prefetch | Fetches a resource for likely future navigation at low priority | Next-page resources for known user flows (checkout step 2) | Medium — fetches in background; bandwidth cost if not used |
| modulepreload | Fetches and parses a JavaScript module in advance | Critical JS modules needed immediately on page load | Medium — parses and compiles the module, not just downloads |
preload is designed for 1–2 critical resources per page. Every preloaded resource competes with the LCP resource for bandwidth during the critical load window. Preloading 10 resources simultaneously defeats the purpose. Only preload the LCP image and perhaps one critical font. Use preconnect for third-party origins and dns-prefetch for origins that load below the fold.
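Putting the hints together — a sketch of a disciplined `<head>`, with hypothetical asset paths and third-party origins:

```html
<head>
  <!-- Preload the LCP hero image at high priority (hypothetical path) -->
  <link rel="preload" as="image" href="/img/hero.webp" fetchpriority="high">
  <!-- Preload one critical font; crossorigin is required for font fetches -->
  <link rel="preload" as="font" type="font/woff2" href="/fonts/main.woff2" crossorigin>
  <!-- Warm the connection to a third-party origin used early in the load -->
  <link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
  <!-- DNS only, for an origin whose script loads below the fold -->
  <link rel="dns-prefetch" href="https://widget.example.com">
</head>
```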
The Critical Rendering Path
The critical rendering path is the sequence of steps the browser must complete before it can paint any pixel on screen: parse HTML, build the DOM; parse CSS, build the CSSOM; combine into the render tree; calculate layout; paint. Any resource that blocks one of these steps directly delays the first paint and LCP.
Render-blocking CSS
All CSS stylesheets in <head> are render-blocking. The browser cannot paint until all CSS is downloaded and parsed. Solutions: inline critical CSS (styles needed for above-fold rendering) in a <style> tag; load non-critical CSS asynchronously.
```html
<!-- Inline critical CSS -->
<style>/* above-fold styles */</style>

<!-- Defer non-critical CSS -->
<link rel="stylesheet" href="/styles.css"
      media="print" onload="this.media='all'">
```
Render-blocking JavaScript
Script elements without async or defer attributes in <head> block both HTML parsing and rendering until they download and execute. Add defer to all non-critical scripts. Use async only for truly independent scripts with no DOM dependency. Move analytics and tag manager scripts to the end of <body> as a fallback if refactoring is not feasible.
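The three loading behaviours side by side — a sketch with hypothetical script paths:

```html
<!-- defer: downloads in parallel, executes in document order after parsing -->
<script defer src="/js/app.js"></script>

<!-- async: executes as soon as it arrives, in no guaranteed order;
     only safe for scripts with no DOM or inter-script dependencies -->
<script async src="/js/analytics.js"></script>

<!-- no attribute: blocks HTML parsing until downloaded and executed;
     avoid in <head> -->
<script src="/js/legacy.js"></script>
```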
Image Delivery Pipeline
- Format: WebP for broad support (25–35% smaller than JPEG), AVIF for maximum compression where supported (40–50% smaller). Use <picture> with fallback.
- Sizing: Serve images at the dimensions they will actually display. A hero image displayed at 800px should not be served at 3000px. Use srcset and sizes for responsive images.
- Compression: For JPEG and WebP, quality 75–85 is typically indistinguishable from higher quality at half the file size. Use Squoosh, ImageMagick, or a CDN image transformation service.
- Lazy loading: Add loading="lazy" to all below-fold images. Never add it to the LCP image or above-fold images.
- Dimensions: Always declare width and height to prevent CLS.
- CDN delivery: Serve images from a CDN. For dynamic sizing, use a CDN image optimisation service (Cloudinary, Imgix, Cloudflare Images) that transforms images on-the-fly and serves from edge.
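The pipeline steps above combine in markup roughly like this — a sketch with hypothetical file paths; the browser picks the first source format it supports, and srcset/sizes select the resolution:

```html
<!-- Hero (LCP) image: modern formats with JPEG fallback, explicit
     dimensions to prevent CLS, and no loading="lazy" -->
<picture>
  <source type="image/avif"
          srcset="/img/hero-800.avif 800w, /img/hero-1600.avif 1600w"
          sizes="(max-width: 900px) 100vw, 800px">
  <source type="image/webp"
          srcset="/img/hero-800.webp 800w, /img/hero-1600.webp 1600w"
          sizes="(max-width: 900px) 100vw, 800px">
  <img src="/img/hero-800.jpg" width="800" height="450" alt="Product hero">
</picture>

<!-- Below-fold image: lazy loaded, still with declared dimensions -->
<img src="/img/gallery-1.webp" width="400" height="300"
     loading="lazy" alt="Gallery photo">
```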
Cache-Control Headers
| Directive | Meaning | Use For |
|---|---|---|
| public | Response can be cached by CDN, proxy, and browser | Static assets, cacheable HTML pages |
| private | Browser can cache but CDN/proxy cannot | Personalised pages (account dashboard, cart) |
| no-store | Never cache under any circumstances | Sensitive data, real-time dashboards |
| no-cache | Cache but must revalidate with server before serving | Pages that must always be fresh but can use ETags |
| max-age=N | Cache is fresh for N seconds | Static assets (long TTL with hash-based filenames) |
| s-maxage=N | CDN/proxy cache duration (overrides max-age for CDNs) | HTML pages where CDN and browser TTLs should differ |
| stale-while-revalidate=N | Serve stale cache while revalidating in background | HTML pages — ensures fast response while keeping content fresh |
| immutable | Content will never change — skip revalidation for max-age period | Fingerprinted static assets (CSS, JS with hash in filename) |
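Two policies cover most sites — a minimal Nginx sketch applying the directives above, with illustrative paths and TTLs:

```nginx
# Fingerprinted static assets (e.g. app.3f2a1b.js): cache for a year,
# skip revalidation entirely
location /assets/ {
    add_header Cache-Control "public, max-age=31536000, immutable";
}

# HTML: short browser TTL, longer CDN TTL, stale served while
# revalidating in the background
location / {
    add_header Cache-Control "public, max-age=60, s-maxage=300, stale-while-revalidate=600";
}
```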
Third-Party Script Management
Third-party scripts are the most common cause of preventable page speed regressions. A page that scores 95 on Lighthouse with first-party code only may score 55 after adding a tag manager, chat widget, A/B testing tool, and ad network. The key principle: load third-party scripts as late as possible and isolate them from the critical rendering path.
- Audit regularly. Run Chrome DevTools Coverage tab and Network tab filtered to third-party domains. Identify scripts that are unused or that run before user interaction.
- Use Google Tag Manager triggers. Fire tags on user interactions (first click, first scroll, form focus) rather than page load. This defers the execution of analytics, heatmaps, and A/B testing scripts until after critical rendering.
- Facade patterns for embeds. Replace YouTube iframes, chat widgets, and map embeds with static placeholders loaded on click. Saves 200KB–1MB of JavaScript per embed.
- Self-host analytics. Running lightweight first-party analytics (Plausible, Fathom) eliminates the DNS lookup, connection, and execution overhead of Google Analytics while preserving data ownership.
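The facade pattern above can be sketched for a YouTube embed — class name, VIDEO_ID placeholder, and styling are all hypothetical; the real iframe (and its ~500KB of JavaScript) loads only on click:

```html
<!-- Static placeholder: just a thumbnail and a play button -->
<div class="yt-facade" data-id="VIDEO_ID"
     style="aspect-ratio: 16/9; cursor: pointer;
            background: url(https://i.ytimg.com/vi/VIDEO_ID/hqdefault.jpg) center/cover;">
  <button aria-label="Play video">&#9654;</button>
</div>

<script>
  // Swap the placeholder for the real iframe on first click
  document.querySelectorAll('.yt-facade').forEach(function (el) {
    el.addEventListener('click', function () {
      var iframe = document.createElement('iframe');
      iframe.src = 'https://www.youtube.com/embed/' + el.dataset.id + '?autoplay=1';
      iframe.allow = 'autoplay; encrypted-media';
      iframe.style.cssText = 'width:100%; aspect-ratio:16/9; border:0;';
      el.replaceWith(iframe);
    }, { once: true });
  });
</script>
```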
Page Speed Audit Tools
| Tool | Best For | Data Type |
|---|---|---|
| Google PageSpeed Insights | First-pass audit combining field data (CrUX) with Lighthouse diagnostics | Field + Lab |
| Chrome DevTools Lighthouse | Local audit with full diagnostic recommendations; CI/CD integration | Lab |
| WebPageTest | Waterfall analysis, geographic testing, filmstrip view, HTTP header inspection | Lab |
| Chrome DevTools Network | Request-by-request analysis — protocol, size, timing, initiator | Lab |
| Chrome DevTools Coverage | Identifies unused CSS and JavaScript on the current page | Lab |
| Google Search Console CWV | Real user CWV data stratified by page type; identifies worst-performing page groups | Field (CrUX) |
Authentic Sources
Comprehensive guide to page speed including resource hints, compression, and caching.
Resource integrity verification for CDN-hosted assets.