

Modern software is built on towers of abstractions, each one making development “easier” while adding overhead:
Today’s real chain: React → Electron → Chromium → Docker → Kubernetes → VM → managed DB → API gateways. Each layer adds “only 20–30%.” Compound a handful of layers and you’re at 2–6× overhead for the same behavior (quick check below).
That’s how a Calculator ends up leaking 32GB. Not because anyone wanted it to, but because nobody noticed the cumulative cost until users started complaining.
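A quick sanity check of that compounding claim (the per-layer percentages come from the sentence above; the layer counts are just a guess at what “a handful” means):

```python
# Sanity check: n stacked layers, each adding fractional overhead r,
# multiply out to (1 + r) ** n total overhead.
for r in (0.20, 0.30):     # "only 20-30%" per layer
    for n in (4, 7):       # a "handful" of layers
        print(f"{n} layers at {r:.0%} each -> {(1 + r) ** n:.1f}x")
# 4 layers at 20% -> 2.1x; 7 layers at 30% -> 6.3x: roughly the 2-6x range.
```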
Man, this is so true. What sucks even more is that so many devs now don’t know how to build anything BUT this stack.
Maybe I’ve been lucky, but I’ve never seen a company (including “cloud-native” ones) use serverless (code compute) like this. In my experience, Lambdas only ever get used for tiny, atomic functions. I’ve never heard of, or seen, anyone try to run a 20-minute video conversion in a Lambda.
Any tool misused is a handicap, and the architecture presented here never makes the case that serverless code compute was the right tool for this task in the first place.
For instance, if video processing is happening on a constant, non-ad-hoc basis, why is serverless being used at all? Who made that decision? And if it’s infrequent enough that a dedicated instance doesn’t make cost sense, why not have a Fargate task (container) or something that encapsulates the “background worker” portion of the process? That would give you an equivalently “simple” flow to the original process diagram, but without the limitations of doing everything in Lambda (rough sketch below).
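Rough sketch of what I mean by the worker. Everything here is a placeholder: the queue URL env var, the message shape, and the ffmpeg flags are mine, not the article’s. The same code runs unchanged on Fargate, ECS on EC2, or a plain VM, which is the point:

```python
# Long-running "background worker": pull video jobs off a queue and run
# the conversion locally, instead of forcing it through a Lambda.
import json
import os
import subprocess

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = os.environ["VIDEO_JOBS_QUEUE_URL"]  # hypothetical env var


def main() -> None:
    while True:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL,
            MaxNumberOfMessages=1,
            WaitTimeSeconds=20,  # long polling; cheap while idle
        )
        for msg in resp.get("Messages", []):
            job = json.loads(msg["Body"])  # e.g. {"src": "...", "dst": "..."}
            # No 15-minute execution ceiling here: a 20-minute transcode is fine.
            subprocess.run(["ffmpeg", "-i", job["src"], job["dst"]], check=True)
            sqs.delete_message(
                QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"]
            )


if __name__ == "__main__":
    main()
```

Wrap that in a small Dockerfile and point an ECS service at it, and you keep the simple flow without Lambda’s runtime limits.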
Lambda is great for things like building SOAR flows, e.g. disabling network access to a compromised instance, taking a snapshot, pulling logs, etc. Infrequent, fast, and able to combine cloud-infra and host-internal actions. That’s a perfect use case for Lambdas.
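To make that concrete, a minimal sketch of one such playbook step, assuming a pre-built deny-all security group (the group ID and the event shape are made up; a real flow would chain more actions, like log collection and notification):

```python
# SOAR-style Lambda: isolate a compromised EC2 instance and snapshot
# its volumes for forensics.
import boto3

ec2 = boto3.client("ec2")
QUARANTINE_SG = "sg-0123456789abcdef0"  # hypothetical deny-all security group


def handler(event, context):
    instance_id = event["instance_id"]  # assumed event shape

    # Cut network access by swapping the instance's security groups.
    ec2.modify_instance_attribute(InstanceId=instance_id, Groups=[QUARANTINE_SG])

    # Snapshot every attached EBS volume.
    volumes = ec2.describe_volumes(
        Filters=[{"Name": "attachment.instance-id", "Values": [instance_id]}]
    )
    snapshot_ids = []
    for vol in volumes["Volumes"]:
        snap = ec2.create_snapshot(
            VolumeId=vol["VolumeId"],
            Description=f"forensics: {instance_id}",
        )
        snapshot_ids.append(snap["SnapshotId"])

    return {"quarantined": instance_id, "snapshots": snapshot_ids}
```

Seconds of work, triggered rarely, done in one shot. Exactly the shape of job Lambda was built for.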