
Edge Computing vs CDN in 2026: When to Push Compute and When to Stay at the Origin


Edge functions and CDNs have blurred together in 2026. Vercel Edge Functions, Cloudflare Workers, and Fastly Compute@Edge all run code at the edge. When is that worth the complexity, and when is a plain CDN still the right answer?

By Arjun Raghavan, Security & Systems Lead, BIPI · July 28, 2024 · 7 min read

#edge #cdn #performance

The line between CDN and edge compute is gone in 2026. Cloudflare Workers run V8 isolates at every PoP. Fastly Compute@Edge runs WASM. Vercel Edge Functions run on Cloudflare's network. The CDN you used to think of as 'caches HTML and images' now runs your authentication logic, your A/B test assignment, and your personalization rules.

The question stopped being 'is edge worth it' and started being 'what should run at the edge and what should not.' We have shipped edge code on three client engagements and migrated one client back to the origin after the architecture stopped paying off. Here is the honest decision framework.

What actually wins at the edge

The pattern across our client work: edge wins when latency is dominated by the round trip to origin and the computation is small. Push the small computation to the edge, eliminate the round trip, win. Specifically:

  • Authentication: validating a JWT and rejecting unauthorized requests at the edge saves 200ms+ of origin travel for every rejected request (see the Worker sketch below)
  • A/B test assignment: deciding which variant a user sees, then routing to the correct cached page
  • Personalization based on geolocation, device type, or cookie-readable signals
  • Bot detection and rate limiting: reject hostile traffic before it reaches origin
  • Image transformation: resize, crop, format-shift on demand without origin involvement
  • Header manipulation: security headers, A/B test headers, custom routing headers

The common factor: stateless or near-stateless computation that depends on data already available at the edge (request headers, the user's KV-stored profile, geolocation). When the computation needs to reach back to origin or to a database, the latency win disappears.
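
To make the authentication bullet concrete, here is a minimal sketch of the pattern, assuming a Cloudflare Worker with an HS256 shared secret bound as an environment variable named SECRET. The binding and header handling are illustrative, and production code would also check exp and the other claims, not just the signature:

```ts
// Minimal sketch: reject requests with a missing or invalid HS256 JWT at the
// PoP, so bad traffic never pays the round trip to origin. The SECRET binding
// is assumed to be configured in wrangler.toml; this verifies the signature
// only, not exp or other claims.

interface Env {
  SECRET: string;
}

function base64UrlDecode(s: string): Uint8Array {
  const b64 = s.replace(/-/g, "+").replace(/_/g, "/");
  const pad = "=".repeat((4 - (b64.length % 4)) % 4);
  return Uint8Array.from(atob(b64 + pad), (c) => c.charCodeAt(0));
}

async function verifyJwt(token: string, secret: string): Promise<boolean> {
  const parts = token.split(".");
  if (parts.length !== 3) return false;
  const key = await crypto.subtle.importKey(
    "raw",
    new TextEncoder().encode(secret),
    { name: "HMAC", hash: "SHA-256" },
    false,
    ["verify"],
  );
  // The signature covers "<header>.<payload>"; compare it to the third segment.
  return crypto.subtle.verify(
    "HMAC",
    key,
    base64UrlDecode(parts[2]),
    new TextEncoder().encode(`${parts[0]}.${parts[1]}`),
  );
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const token = (request.headers.get("Authorization") ?? "").replace(/^Bearer /, "");
    if (!token || !(await verifyJwt(token, env.SECRET))) {
      // Rejected at the edge: no origin travel for unauthorized requests.
      return new Response("Unauthorized", { status: 401 });
    }
    return fetch(request); // validated traffic continues to origin
  },
};
```

Note what the Worker does not do: it never reaches back to a database. Everything it needs arrives with the request, which is exactly why the latency win holds.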

Where CDN-only is still the right answer

If your workload is 'serve cached HTML and assets to anonymous users,' you do not need edge compute. You need a CDN with good cache behavior. Adding edge functions for the sake of having them is how you turn a 99.99 percent uptime CDN into a 99.9 percent uptime application.

We see this often: a marketing site, mostly static, gets edge functions added for personalization that affects 3 percent of users. The other 97 percent now route through edge code that adds 10ms and one more failure point. The pragmatic answer for marketing sites is plain CDN with cache-control headers and the personalization done client-side or at origin for the small minority that needs it.
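
What "plain CDN with cache-control headers" looks like in practice is roughly this, sketched as an origin handler. The helper and the specific TTL values are illustrative, not prescriptive:

```ts
// Sketch of origin response headers for a mostly-static marketing page.
// The long s-maxage lets the CDN absorb nearly all traffic; the short
// browser max-age keeps client copies reasonably fresh.
export function marketingPageResponse(html: string): Response {
  return new Response(html, {
    headers: {
      "Content-Type": "text/html; charset=utf-8",
      // CDN: cache for a day, serve stale for an hour while revalidating.
      // Browser: revalidate after five minutes.
      "Cache-Control":
        "public, max-age=300, s-maxage=86400, stale-while-revalidate=3600",
    },
  });
}
```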

Cold start realities

Cloudflare Workers start in under 5ms (V8 isolates, the same primitive Chrome uses for tabs). Fastly Compute@Edge starts in under 1ms (WASM). Vercel Edge Functions inherit Cloudflare's cold start. AWS Lambda@Edge is much slower, often 100ms+ on cold start, which is why it is rarely the right tool for latency-sensitive edge work.

The number that matters is not the cold start in milliseconds, it is the percentage of requests that pay it. For a hot route on Cloudflare Workers, cold start affects under 0.1 percent of requests. For a rarely-hit route, it might be 5 percent. The latency tail is real but small.
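
A quick back-of-envelope makes the point, using the figures above. The function is just arithmetic, and the fractions are illustrative:

```ts
// Average latency added by cold starts = cold-start cost x fraction of
// requests that pay it. The unlucky request still pays the full cost, but
// the aggregate effect depends on how hot the route is.
function amortizedColdStartMs(coldStartMs: number, coldFraction: number): number {
  return coldStartMs * coldFraction;
}

amortizedColdStartMs(5, 0.001);  // hot Workers route: 0.005ms on average
amortizedColdStartMs(5, 0.05);   // rarely-hit Workers route: 0.25ms
amortizedColdStartMs(100, 0.05); // Lambda@Edge-style cold start: 5ms on
                                 // average, 100ms for every cold request
```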

The state problem

Edge runtimes have limited state stories. Cloudflare KV is eventually consistent (60+ seconds to propagate). Durable Objects give you strong consistency at one PoP, but at the cost of routing all traffic for that key to that PoP. Fastly has its own KV store. Vercel has Edge Config (small, read-heavy).

What this means in practice: read-heavy state is fine at the edge. Write-heavy state is not. If your edge function needs to update a database on every request, you have built distributed RPC with extra steps and you are paying the round-trip latency anyway, plus the edge complexity.
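
Here is a sketch of the read-heavy case that does work, assuming a Workers KV namespace bound as PROFILES. The binding, header names, and profile shape are all illustrative: the Worker reads eventually consistent state, stamps a variant header, and never writes on the request path.

```ts
// Read-heavy edge state: look up a user's A/B variant from Workers KV.
// KV is eventually consistent, so a stale read is acceptable here: this
// decides personalization, never authorization.

interface Env {
  PROFILES: KVNamespace; // type from @cloudflare/workers-types
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const userId = request.headers.get("X-User-Id") ?? "anonymous";
    // cacheTtl keeps the value hot at this PoP for 60 seconds.
    const profile = await env.PROFILES.get<{ variant?: string }>(userId, {
      type: "json",
      cacheTtl: 60,
    });
    const variant = profile?.variant ?? "control";

    const upstream = await fetch(request);
    const response = new Response(upstream.body, upstream);
    response.headers.set("X-AB-Variant", variant); // downstream routes/renders by variant
    return response;
  },
};
```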

Operational considerations

Things that bite teams new to edge:

  1. Debugging is harder. Logs are distributed across PoPs, request tracing must propagate explicitly, and reproducing 'works for me' bugs is genuinely difficult when the bug only appears in Singapore
  2. Deployment is global and fast, which means rollback windows are short. A bad edge deploy hits all users in 30 seconds
  3. Per-request CPU limits are real. A function that loops or does heavy crypto can hit the limit and fail. Test under load, not just functionally
  4. Bundle size limits exist (1MB for Workers, larger for some others). Tree-shaking matters, dependencies matter
  5. Pricing models are different from origin compute. Per-request pricing means a request flood is a cost flood; budget alerts matter (see the cost sketch after this list)
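
On point 5, the arithmetic is worth writing down once. The unit price below is a placeholder, not any provider's actual rate:

```ts
// Per-request pricing: traffic maps linearly to spend, so a flood of
// requests is also a flood of cost. Set budget alerts accordingly.
function monthlyCostUsd(requestsPerMonth: number, pricePerMillionUsd: number): number {
  return (requestsPerMonth / 1_000_000) * pricePerMillionUsd;
}

monthlyCostUsd(50_000_000, 0.3);    // a normal month: $15
monthlyCostUsd(5_000_000_000, 0.3); // a bot flood: $1,500 for the same code
```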

The migration we walked back

One client moved their entire authentication and authorization logic to Cloudflare Workers. The performance was great. The operational story was painful: every auth bug required reading distributed Worker logs, the testing story was thin, and the team did not have anyone whose primary expertise was edge runtimes. After 8 months, we moved auth back to origin behind a CDN. The 200ms they gave back mattered less than the operational cost of running an edge service the team could not effectively debug.

The decision rule we apply now: push to the edge what genuinely benefits from it (the bullet list above) and keep at origin what does not. Do not move to the edge for the sake of the architecture. Move because the latency math says so. The teams that succeed at edge computing have one or two well-chosen edge components, not an entire architecture pushed to the edge.

Read more field notes, explore our services, or get in touch at info@bipi.in.