INP Replaced FID. Here Is What Most Teams Got Wrong During the Transition.
Digital Engineering
Interaction to Next Paint became a Core Web Vital in March 2024, replacing First Input Delay. Teams that treated it as a small change got caught out: the metric measures something quite different, and the optimisations are different too.
By Arjun Raghavan, Security & Systems Lead, BIPI · January 18, 2026 · 6 min read
Google replaced First Input Delay with Interaction to Next Paint in March 2024. The teams that read the announcement, saw 'similar metric, slightly different measurement', and assumed their FID-tuned sites would be fine got an unpleasant surprise in the next Search Console report. INP scores were significantly worse than FID scores had been, and many sites that previously passed Core Web Vitals now failed.
INP and FID measure different things. FID measured the delay before the browser started processing the first input. INP measures the full latency from any interaction to the next visual update, across the whole session. Optimisations for one are not optimisations for the other.
What actually changed
- FID measured only the first input. INP measures all interactions and reports the worst (roughly the 98th percentile for pages with many interactions).
- FID measured input delay only. INP measures input delay plus processing time plus presentation delay.
- FID was a one-shot metric, easy to score well by deferring all but the most important early script. INP is continuous, so deferred work that runs on later interactions still counts.
- FID could be passed by aggressive code splitting at first paint. INP requires every interaction throughout the session to be fast.
What teams got wrong
We saw the same mistakes repeatedly during the 2024 transition:
- Assuming FID-good = INP-good. Sites with great FID but slow click handlers (large React component trees, expensive event handlers) showed up as INP-poor.
- Lighthouse-only measurement. Lighthouse simulates one interaction in a controlled environment. Field data (CrUX) measures real users with real devices and real interactions. The gap is large.
- Optimising for desktop. Field data is dominated by mobile, and INP problems are usually mobile-only: lower-end devices struggle with the same handler a desktop machine runs in 5ms.
- Trusting React 18 'concurrent' features to fix it. React 18 helps with rendering, but synchronous handler logic (state updates, computed values) still blocks the main thread.
INP makes you accountable for every interaction in the session, not just the first one. Sites that hid expensive work behind 'users will not click that often' now fail.
What actually moves INP
- Break up long tasks. Any handler that runs for more than 50ms blocks subsequent input. Split the work with scheduler.yield() (Chrome 129+), a setTimeout(() => ...) fallback, or an equivalent yield-to-main helper.
- Use requestIdleCallback for non-urgent work after an interaction. Show the visual response first, do tracking and analytics second. (A sketch of both patterns follows this list.)
- Move heavy computation off the main thread. Web Workers handle parsing, sorting, and filtering of large datasets without blocking input (see the worker sketch below).
- Avoid synchronous reflow in handlers. Reading layout properties (offsetWidth, getBoundingClientRect) after a DOM or style write forces a synchronous layout (example below).
- Defer third-party scripts that listen to clicks (analytics, A/B testing, chat widgets). Their handlers run before yours and add to interaction latency.
- For React: avoid expensive setState calls in synchronous handlers when possible; use startTransition for non-urgent updates so the visual response can paint first (React sketch below).
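A minimal sketch of the yield-to-main and idle-callback patterns from the first two items. submitButton, showSpinner, validateForm, render, and trackClick are hypothetical placeholders, not part of any library:

```js
// Yield-to-main helper: prefer scheduler.yield() (Chrome 129+),
// fall back to a zero-delay setTimeout wrapped in a Promise.
function yieldToMain() {
  if (globalThis.scheduler?.yield) {
    return scheduler.yield();
  }
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Hypothetical click handler: show the cheap visual response first,
// yield so the browser can paint and process queued input, then run
// the heavier work. Analytics waits for an idle period at the end.
submitButton.addEventListener('click', async () => {
  showSpinner();                  // cheap, user-visible response

  await yieldToMain();            // let the browser paint before continuing
  const result = validateForm();  // heavier synchronous work, now off the
                                  // critical path of the interaction
  await yieldToMain();
  render(result);

  requestIdleCallback(() => trackClick(result));  // non-urgent work last
});
```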
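Moving work into a Web Worker, sketched under the assumption of a client-side filter over a large dataset; filter-worker.js, allRows, searchInput, and renderRows are illustrative names:

```js
// Main thread: send the dataset to the worker once, then keep the
// input handler cheap by posting only the query and painting when
// the filtered result comes back.
const worker = new Worker('filter-worker.js');
worker.postMessage({ type: 'load', rows: allRows });

searchInput.addEventListener('input', (event) => {
  worker.postMessage({ type: 'query', query: event.target.value });
});

worker.onmessage = (event) => renderRows(event.data);

// filter-worker.js: the expensive filtering runs off the main thread.
let rows = [];
self.onmessage = (event) => {
  if (event.data.type === 'load') {
    rows = event.data.rows;
  } else {
    self.postMessage(rows.filter((r) => r.name.includes(event.data.query)));
  }
};
```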
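The forced-reflow point in miniature; card and panel stand in for whatever elements the handler touches:

```js
// Forced reflow: a write followed by a layout read makes the browser
// recalculate layout synchronously inside the handler.
card.classList.add('expanded');     // write (invalidates layout)
const after = panel.offsetHeight;   // read -> forced synchronous layout

// Batched alternative: do the layout reads first, then the writes,
// so the handler triggers at most one layout pass.
const before = panel.offsetHeight;  // read while layout is still clean
card.classList.add('expanded');     // write afterwards
```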
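And a React sketch of the startTransition point. The component, Spinner, and ProductList are illustrative, not a prescribed pattern:

```jsx
import { useState, useTransition } from 'react';

function ProductFilter({ products }) {
  const [query, setQuery] = useState('');
  const [results, setResults] = useState(products);
  const [isPending, startTransition] = useTransition();

  function handleChange(event) {
    const next = event.target.value;
    setQuery(next);  // urgent: the keystroke paints immediately

    startTransition(() => {
      // non-urgent: the expensive filtered re-render can be interrupted
      // by later input instead of blocking the next paint
      setResults(products.filter((p) => p.name.includes(next)));
    });
  }

  return (
    <>
      <input value={query} onChange={handleChange} />
      {isPending ? <Spinner /> : <ProductList items={results} />}
    </>
  );
}
```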
Measurement that matters
Lab tools like Lighthouse, WebPageTest, and the DevTools Performance panel are useful for diagnosing, not for measuring. Production INP requires real-user monitoring (RUM): the web-vitals.js library reports the actual INP your users experience, broken down by device, page, and interaction. Without RUM, you are guessing.
We instrument every client deploy with web-vitals.js reporting to a small ingest endpoint. The dashboard shows P75 INP by route and device class. INP regressions show up within 2-3 days of a new deploy, a far tighter feedback loop than Search Console, whose Core Web Vitals report reflects a 28-day rolling window of CrUX data.
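A minimal version of that instrumentation, using the web-vitals attribution build; the /vitals endpoint and the payload shape are our conventions, not part of the library:

```js
import { onINP } from 'web-vitals/attribution';

// Report the page's INP, with attribution for the slowest interaction,
// once the value is final (the library reports when the page is hidden).
onINP((metric) => {
  const body = JSON.stringify({
    name: metric.name,               // 'INP'
    value: metric.value,             // milliseconds
    rating: metric.rating,           // 'good' | 'needs-improvement' | 'poor'
    page: location.pathname,
    attribution: metric.attribution, // which interaction and where the time went
  });

  // sendBeacon survives the page unloading; fall back to fetch with keepalive.
  if (!navigator.sendBeacon('/vitals', body)) {
    fetch('/vitals', { method: 'POST', body, keepalive: true });
  }
});
```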
What good INP looks like
Google's thresholds: INP under 200ms is 'good,' 200-500ms is 'needs improvement,' over 500ms is 'poor.' The thresholds are aggressive. Sites with significant interactivity (forms, filters, dynamic content) routinely sit in needs-improvement territory until specifically tuned.
Achievable targets: P75 INP under 200ms across mobile is realistic for well-tuned content sites; under 100ms for static / news sites; 200-300ms for SaaS dashboards and editors with significant interactivity.
Closing
INP is a stricter metric than FID and the optimisations are different. The teams who treated it as a renaming exercise lost their Core Web Vitals badge. The teams who treated it as a new metric, instrumented production traffic, and broke up long tasks across all interactions ended up with faster sites overall. The metric is doing what it was designed to do: surfacing latency that real users experience but tooling previously hid.
Read more field notes, explore our services, or get in touch at info@bipi.in.