The name is misleading. The technology is not.
Let's start by clearing something up: serverless computing does not mean there are no servers.
There are absolutely servers. You just don't see them, manage them, patch them, scale them, or think about them. The cloud provider handles all of that — and you pay only for the compute time your code actually uses.
That's the deal. And for a growing number of use cases, it's a deal worth taking.
But serverless isn't magic, and it isn't always the right choice. Like every architectural decision in software development, it has meaningful trade-offs. This post breaks down what serverless actually is, how it works under the hood, where it genuinely shines, and where it doesn't.
What Is Serverless Architecture?
Serverless is a cloud execution model where:
- You write code (typically in small, discrete units called functions)
- You deploy that code to a cloud provider
- The provider runs your code in response to events
- You are billed only for the milliseconds your code actually ran
Major platforms:
- AWS Lambda
- Google Cloud Functions
- Azure Functions
- Cloudflare Workers
Most implementations follow Functions as a Service (FaaS): your app is composed of stateless functions that each handle a specific task.
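In practice, a FaaS function is a single entry point that receives an event and returns a result. Here is a minimal sketch in Python, loosely following the AWS Lambda handler signature; the event shape is a made-up example:

```python
import json

def handler(event, context):
    # A typical FaaS entry point: receive an event dict, perform one
    # discrete task, return a response. No server setup, no routing
    # framework, no process to keep alive.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Invoked locally with a fake event for illustration:
result = handler({"name": "serverless"}, context=None)
print(result["statusCode"])  # 200
```

The provider, not your code, decides when and where this function runs.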
How Does It Actually Work?
When a serverless function is triggered:
- The provider spins up an execution environment
- Your function code runs
- The result is returned
- The environment may stay warm briefly or shut down
Cold starts happen when no warm environment is available, typically because the function hasn't run recently: the provider must provision one from scratch before your code executes, adding latency to that first request. They are a key nuance of serverless.
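One practical consequence of this lifecycle: module-level code runs once per execution environment, not once per invocation. A common pattern is to put expensive setup (database connections, config loading) at module level so warm invocations reuse it. A runnable sketch, with a counter standing in for real setup work:

```python
import time

# Module-level code runs once per execution environment (at cold start),
# so expensive setup like opening database connections usually goes here.
INIT_COUNT = 0

def _expensive_setup():
    global INIT_COUNT
    INIT_COUNT += 1
    return {"connected_at": time.time()}  # stands in for a real client

_client = _expensive_setup()  # runs once, when the environment spins up

def handler(event, context):
    # Warm invocations reuse _client instead of reconnecting.
    return {"init_count": INIT_COUNT}

# Two "warm" invocations in the same environment share the setup:
print(handler({}, None))  # {'init_count': 1}
print(handler({}, None))  # {'init_count': 1}
```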
Where Serverless Genuinely Shines
Event-driven workloads
Processing uploads, sending emails on registration, reacting to database changes.
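An upload handler, for instance, is just a function keyed to a storage notification. The event below roughly follows the shape S3 sends, trimmed heavily for illustration:

```python
def handle_upload(event, context):
    # Reacts to storage upload notifications. Each record describes
    # one uploaded object; real events carry far more metadata.
    processed = []
    for record in event.get("Records", []):
        key = record["s3"]["object"]["key"]
        # ...generate a thumbnail, scan the file, update an index...
        processed.append(key)
    return {"processed": processed}

fake_event = {"Records": [{"s3": {"object": {"key": "uploads/report.pdf"}}}]}
print(handle_upload(fake_event, None))  # {'processed': ['uploads/report.pdf']}
```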
Unpredictable or spiky traffic
Scales automatically without paying for idle capacity.
APIs and microservices with low-to-moderate traffic
Each endpoint maps to a function. You pay for usage.
Scheduled jobs and automation
Replace cron jobs with serverless functions.
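A scheduled job is just another handler; the schedule itself lives in provider configuration rather than in code (AWS EventBridge, for example, uses expressions like `rate(1 hour)` or `cron(0 3 * * ? *)`). A minimal sketch:

```python
from datetime import datetime, timezone

def nightly_cleanup(event, context):
    # The platform invokes this on a schedule defined in its config;
    # the function body is an ordinary task, no cron daemon required.
    now = datetime.now(timezone.utc)
    # ...delete expired records, rotate logs, send a digest...
    return {"ran_at": now.isoformat()}

print(nightly_cleanup({}, None)["ran_at"])
```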
Rapid prototyping
No infrastructure overhead. Write, deploy, test.
Where Serverless Struggles
Long-running processes
Execution time limits (e.g., 15 minutes on Lambda) make some workloads a poor fit.
Cold start latency
First request after idle can be slow. Mitigations exist (provisioned concurrency, keep-warm pings) but add cost and complexity.
Statelessness as a constraint
Requires external state (databases, caches) for persistence.
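Because any invocation may land in a fresh environment, in-memory state cannot be trusted to survive between calls. The sketch below uses a plain dict as a stand-in for an external store such as Redis or DynamoDB; all names here are illustrative:

```python
# Stand-in for an external store (Redis, DynamoDB, a SQL database...).
# In a real function this would be a network client, not a local dict,
# because local memory is wiped whenever the environment is recycled.
external_store = {}

def count_visit(event, context):
    user = event["user"]
    # Read-modify-write against the external store, never a global
    # variable you expect to persist across invocations.
    external_store[user] = external_store.get(user, 0) + 1
    return {"user": user, "visits": external_store[user]}

print(count_visit({"user": "ada"}, None))  # {'user': 'ada', 'visits': 1}
print(count_visit({"user": "ada"}, None))  # {'user': 'ada', 'visits': 2}
```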
Vendor lock-in
Deep integration with a specific provider's ecosystem.
Observability and debugging
Distributed functions require deliberate logging, tracing, and monitoring.
Serverless vs. Traditional Servers vs. Containers — The Honest Comparison
- Traditional servers: full control, great for long-running and complex apps.
- Containers: great for microservices and portability.
- Serverless: great for event-driven, spiky workloads with minimal ops overhead.
Serverless is not all-or-nothing. Many systems use a hybrid approach.
When Should You Choose Serverless?
Choose serverless when:
- Workloads are event-driven and discrete
- Traffic is unpredictable
- You want minimal operational overhead
- You're building lightweight APIs, automation, or background tasks
- Cost efficiency at low-to-moderate scale matters
Think twice when:
- Processes run long
- You need consistently low latency
- Vendor independence is critical
- Your team lacks experience with distributed systems observability
Serverless architecture is becoming a tool every modern developer should understand, even if they don't always use it. Knowing when to apply it, and when not to, is part of being a good software architect.