Hacker News

10 hours ago by simonw

This article touches on "Request Coalescing" which is a super important concept - I've also seen this called "dog-pile prevention" in the past.

Varnish has this built in - good to see it's easy to configure with NGINX too.

One of my favourite caching proxy tricks is to run a cache with a very short timeout, but with dog-pile prevention baked in.

This can be amazing for protecting against sudden unexpected traffic spikes. Even a cache timeout of 5 seconds will provide robust protection against tens of thousands of hits per second, because request coalescing/dog-pile prevention ensures that your CDN host sends a request to the origin at most once every five seconds.

I've used this on high traffic sites and seen it robustly absorb any amount of unauthenticated (hence no variety on a per-cookie basis) traffic.
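
Here's the shape of that trick as an in-process TypeScript sketch (a CDN does it at the proxy layer, but the mechanics are the same; all names are hypothetical). The promise goes into the cache before it resolves, which is what makes concurrent misses coalesce:

    // Micro-cache with request coalescing: at most one origin request
    // per key per TTL window; concurrent misses share the in-flight promise.
    type Entry = { value: Promise<string>; expires: number };
    const cache = new Map<string, Entry>();
    const TTL_MS = 5_000; // the 5-second timeout from the example above

    function cachedFetch(key: string, origin: (k: string) => Promise<string>): Promise<string> {
      const now = Date.now();
      const hit = cache.get(key);
      if (hit && hit.expires > now) return hit.value; // fresh, or still in flight
      const value = origin(key); // the single origin request for this window
      value.catch(() => cache.delete(key)); // don't pin a failure for the whole window
      cache.set(key, { value, expires: now + TTL_MS });
      return value;
    }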

5 hours ago by sciurus

I'll echo what Simon said; we share some experiences here. There's a potential footgun, though, that anyone getting started with this should know about:

Request coalescing can be incredibly beneficial for cacheable content, but for uncacheable content you need to turn it off! Otherwise you'll cause your cache server to serialize requests to your backend for it. Let's imagine a piece of uncacheable content takes one second for your backend to generate. What happens if your users request it at a rate of twice a second? Those requests are going to start piling up, breaking page loads for your users while your backend servers sit idle.

If you are using Varnish, the hit-for-miss concept addresses this. However, it's easy to implement wrong when you start writing your own VCL. Be sure to read https://info.varnish-software.com/blog/hit-for-miss-and-why-... and related posts. My general answer to getting your VCL correct is writing tests, but this is a tricky behavior to validate.

I'm unsure how nginx's caching handles this, which would make me nervous about using the proxy_cache_lock directive for locations with a mix of cacheable and uncacheable content.
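
For illustration, the shape of hit-for-miss is easy to sketch outside of VCL (TypeScript here, hypothetical names): once a response proves uncacheable, mark the key so later requests bypass the lock instead of queueing behind each other. (Real hit-for-miss also expires those markers; omitted here.)

    const inFlight = new Map<string, Promise<string>>();
    const uncacheable = new Set<string>(); // the "hit-for-miss" markers

    async function coalescedGet(url: string): Promise<string> {
      // Known-uncacheable keys skip coalescing: parallel, not serialized.
      if (uncacheable.has(url)) return fetch(url).then((r) => r.text());
      const pending = inFlight.get(url);
      if (pending) return pending; // coalesce concurrent cacheable misses
      const p = fetch(url)
        .then(async (res) => {
          const cc = res.headers.get("cache-control") ?? "";
          if (cc.includes("no-store") || cc.includes("private")) {
            uncacheable.add(url); // future requests won't wait on this key
          }
          return res.text();
        })
        .finally(() => inFlight.delete(url));
      inFlight.set(url, p);
      return p;
    }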

3 minutes ago by Akronymus

Speaking of non-cacheable data:

https://arstechnica.com/gaming/2015/12/valve-explains-ddos-i...

Caching is HARD.

8 minutes ago by endymi0n

And to add the last big one from the trifecta:

Know how to deal with cacheable data. Know how to deal with uncacheable data. But by all means, know how to keep them apart.

Accidentally caching uncacheable data has led to some of the ugliest and most avoidable data leaks and compromises in recent times.

If you go down the "route everything through a CDN" route (which can be as easy as ticking a box in the Google Cloud Platform backend), make extra sure to flag authenticated data as Cache-Control: private / no-cache.
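
A minimal Node sketch of that flagging (the session-cookie check is hypothetical, and no-store is the stricter sibling of no-cache):

    import { createServer } from "node:http";

    createServer((req, res) => {
      // Anything tied to a user must never land in a shared cache.
      const authenticated = req.headers.authorization || req.headers.cookie?.includes("session=");
      res.setHeader("Cache-Control", authenticated ? "private, no-store" : "public, max-age=60");
      res.end(authenticated ? "your account page" : "public page");
    }).listen(8080);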

8 hours ago by sleepy_keita

Back when I was just getting started, we were doing a lot of WordPress stuff. A client contacted us: "oh yeah, later today we're probably going to have 1000x the traffic because of a popular promotion". I had no idea what to do, so I thought: I'll just set the Varnish cache TTL to 1 second; that way WordPress will only see a maximum of one request per second per page. It worked pretty much flawlessly, and taught me a lot about the importance of request coalescing and how caches work.

6 hours ago by mnutt

In Varnish, if you have some flexibility in your requirements, you can enable grace mode to serve stale responses while updating from the origin in the background, and avoid the long request every [5] seconds.

Not quite the same layer, but in node.js I’m a fan of the memoize(fn)->promise pattern where you wrap a promise-returning function to return the _same_ promise for any callers passing the same arguments. It’s a fairly simple caching mechanism that coalesces requests and the promise resolves/rejects for all callers at once.
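
Roughly this, as a sketch (the helper name is made up):

    // Wrap any promise-returning function so concurrent calls with the
    // same arguments share a single promise; the entry is dropped once
    // it settles, so the next call starts a fresh request.
    function memoizeInflight<A extends unknown[], R>(
      fn: (...args: A) => Promise<R>,
    ): (...args: A) => Promise<R> {
      const inFlight = new Map<string, Promise<R>>();
      return (...args: A) => {
        const key = JSON.stringify(args);
        let p = inFlight.get(key);
        if (!p) {
          p = fn(...args).finally(() => inFlight.delete(key));
          inFlight.set(key, p);
        }
        return p;
      };
    }

    // e.g.: const getUser = memoizeInflight((id: string) =>
    //   fetch(`/api/users/${id}`).then((r) => r.json()));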

7 hours ago by cortesoft

"Thundering herd" problem is how I have always heard it called.

7 hours ago by thaumasiotes

The thundering herd problem isn't really about high levels of traffic. To the extent that that's a problem, it's just an ordinary DOS.

The thundering herd problem specifically refers to what happens if you coordinate things so that all your incoming requests occur simultaneously. Imagine that over the course of a week, you tell everyone who needs something from you "I'm busy right now; please come back next Tuesday at 11:28 am". You'll be overwhelmed on Tuesday at 11:28 am regardless of whether your average weekly workload is high or low, because you concentrated your entire weekly workload into the same one minute. You solve the thundering herd problem by not giving out the same retry time to everyone who contacts you while you're busy.

6 hours ago by 8note

Hmm. I think of thundering herd as being about retries.

All your failing requests batch up when your retry strategy sucks; then you end up with really high traffic at every retry, and very little in between.

3 hours ago by eyelidlessness

Thundering herd is about a failure mode in backpressure scenarios. If you have a backoff and a delayed queue of requests, letting them all proceed at once when the backpressure scenario resolves is likely to recreate it or create a new one. Staggering them so they proceed slightly offset in time avoids that.
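
A small TypeScript sketch of that staggering, using "full jitter" backoff (the constants are arbitrary): each client picks a random delay inside a growing window, so the queued requests never re-form a single wave.

    function retryDelayMs(attempt: number, baseMs = 100, capMs = 30_000): number {
      const ceiling = Math.min(capMs, baseMs * 2 ** attempt);
      return Math.random() * ceiling; // the randomness is what breaks up the herd
    }

    async function withRetries<T>(op: () => Promise<T>, maxAttempts = 5): Promise<T> {
      for (let attempt = 0; ; attempt++) {
        try {
          return await op();
        } catch (err) {
          if (attempt + 1 >= maxAttempts) throw err;
          await new Promise((r) => setTimeout(r, retryDelayMs(attempt)));
        }
      }
    }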

9 hours ago by jabo

Love the level of detail that Fly's articles usually go into.

We have a distributed CDN-like feature in the hosted version of our open source search engine [1] - we call it our "Search Delivery Network". It works on the same principles, with the added nuance of also needing to replicate data over high-latency networks between data centers as far apart as Sao Paulo and Mumbai, for example. That brings with it another fun set of challenges to deal with! Hoping to write about it when bandwidth allows.

[1] https://cloud.typesense.org

5 hours ago by mrkurt

I'd love to read about it.

13 minutes ago by 3np

As someone who’s mostly clueless about BGP but has a fair grasp of all the other layers mentioned, I’d love to see posts like this go more in depth on it for folks like myself.

11 hours ago by amirhirsch

This is cool and informative and Kurt's writing is great:

The briny deeps are filled with undersea cables, crying out constantly to nearby ships: "drive through me"! Land isn't much better, as the old networkers shanty goes: "backhoe, backhoe, digging deep — make the backbone go to sleep".

7 hours ago by tptacek

We can't take credit for the backhoe thing; that really is an old networking shanty.

3 hours ago by daniel_iversen

Years ago I was involved in high-performance delivery for a bunch of newspapers, and we used Squid[1] to good effect. One nice thing you could also do (though it's probably a bit hacky and old school these days) was to "open up" only parts of the page to be dynamic while the rest stayed cached, or have different cache rules for different page components[2]. With some legacy apps (like some CMSes) this can hugely improve performance without sacrificing the dynamic, "fresh looking" parts of the website.

[1] http://www.squid-cache.org/ [2] https://en.wikipedia.org/wiki/Edge_Side_Includes
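
As a toy illustration of what ESI buys you (TypeScript, with a hypothetical fetchFragment; a real proxy does this for you, with per-fragment cache rules): the page shell comes from cache, and only the marked holes get filled per request.

    // Split the cached shell on <esi:include src="..."/> tags; with a
    // capture group in the regex, the odd-numbered pieces are fragment URLs.
    async function renderWithIncludes(
      shell: string,
      fetchFragment: (src: string) => Promise<string>,
    ): Promise<string> {
      const pieces = shell.split(/<esi:include src="([^"]+)"\s*\/>/g);
      const out: string[] = [];
      for (let i = 0; i < pieces.length; i++) {
        out.push(i % 2 ? await fetchFragment(pieces[i]) : pieces[i]);
      }
      return out.join("");
    }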

11 hours ago by babelfish

fly.io has a fantastic engineering blog. Has anyone used them as a customer (enterprise or otherwise) and have any thoughts?

3 minutes ago by corobo

I read their blog posts and I visit their site every time I start a new project, but it just hasn't clicked with me yet.

Tinkering has been great, but the add-on-style pricing scares the jeebs out of me (my wallet), so I just assume I can't afford it for now and spin up a DO droplet instead. The droplet is probably more expensive for my use case, but call it ADHD tax, haha. At least it's capped.

10 hours ago by joshuakelly

Yes, I'm using it. I deploy a TypeScript project that runs in a pretty straightforward node Dockerfile. The build just works - and it's smart too. If I don't have a Docker daemon locally, it creates a remote one and does some WireGuard magic. We don't have customers on this yet, but I'm actively sending demos and rely on it.

Hopefully I'll get to keep working on projects that can make use of it, because it feels like a polished 2021 version of the Heroku-era dev experience to me. Also, full disclosure: Kurt tried to get me to use it in YC W20, but I didn't really listen until over a year later.

9 hours ago by mike_d

I run my own worldwide anycast network and still end up deploying stuff to Fly because it is so much easier.

The folks who actually run the network for them are super clueful and basically the best in the industry.

7 hours ago by jbarham

One of my side projects is a DNS hosting service, SlickDNS (https://www.slickdns.com/).

I moved my authoritative DNS name servers over to Fly a few months ago. After some initial teething issues with Fly's UDP support (which were quickly resolved) it's been smooth sailing.

The Fly UX via the flyctl command-line app is excellent, very Heroku-like. Only downside is it makes me mad when I have to fight the horrendous AWS tooling in my day job.

9 hours ago by Rd6n6

Sounds like a fun weekend project
