Serverless vs Containers: Choosing the Right Compute Model

Compare serverless (Lambda, Cloud Run) and containers (ECS, EKS, Fargate) on cold starts, pricing, scaling, vendor lock-in, and local development. Learn when to use each compute model.

Abhishek Patel -- 8 min read

Two Models, Different Trade-Offs

The serverless vs containers debate is one of the most common architectural decisions in cloud computing, and it's often framed as an either-or choice. It's not. Serverless functions and containers solve different problems, and most production systems use both. Lambda handles event-driven glue work while ECS or Kubernetes runs the core application. The real question isn't which is "better" -- it's which fits each part of your workload.

I've built systems on both sides. I've seen teams force everything into Lambda and hit execution limits within months. I've also seen teams containerize a simple webhook handler that would've been three lines in a Lambda function. This guide breaks down the actual trade-offs so you can make informed decisions instead of following trends.

What Is Serverless Computing?

Definition: Serverless computing is an execution model where the cloud provider manages server provisioning, scaling, and maintenance. You deploy functions or applications, and the provider runs them on demand, scaling to zero when idle and charging only for actual execution time.

Serverless doesn't mean "no servers." It means you don't manage them. The major serverless compute platforms:

  • AWS Lambda -- functions triggered by events (API Gateway, S3, SQS, etc.)
  • Google Cloud Run -- containers that scale to zero (serverless containers)
  • Azure Container Apps -- similar to Cloud Run, with Dapr integration
  • AWS Fargate -- runs containers without managing EC2 instances (serverless container hosting)

What Are Containers?

Containers package your application and its dependencies into an isolated, portable unit. You build a Docker image, push it to a registry, and run it on any container runtime. You're responsible for choosing instance sizes, configuring scaling, and managing the underlying infrastructure (unless you use Fargate or Cloud Run).
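To make the distinction concrete, here's a minimal sketch of the kind of long-lived process you'd package into a container image (assumes Node.js/TypeScript; the port convention and response shape are illustrative):

```typescript
// A long-lived HTTP process -- the unit you'd package into a container image.
// The runtime (ECS, Kubernetes, Cloud Run) keeps this process alive;
// scaling means running more replicas of it.
import { createServer } from 'http';

export const server = createServer((req, res) => {
  res.setHeader('Content-Type', 'application/json');
  res.end(JSON.stringify({ ok: true, path: req.url }));
});

// PORT is conventionally injected by the orchestrator / task definition
server.listen(Number(process.env.PORT ?? 8080));
```

Unlike a Lambda function, this process owns its own lifecycle: it starts once, serves many requests, and keeps running until the orchestrator stops it.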

Head-to-Head Comparison

| Factor | Serverless (Lambda) | Containers (ECS/EKS) |
| --- | --- | --- |
| Cold starts | 100 ms-10 s depending on runtime and size | None (always running) |
| Execution limit | 15 minutes (Lambda) | No limit |
| Memory limit | 10 GB (Lambda) | Limited by instance size |
| Scaling | Near-instant, automatic, to thousands of concurrent executions | Minutes (new tasks/pods need to start) |
| Scale to zero | Yes (no cost when idle) | No (a minimum task/pod count stays running) |
| Local development | Emulators (SAM, serverless-offline) -- imperfect parity | Docker Compose -- near-perfect parity |
| Vendor lock-in | High (event triggers, IAM, service integrations) | Low (Docker images run anywhere) |
| Debugging | CloudWatch Logs, X-Ray -- limited | Full access to container logs, shell, metrics |
| Networking | VPC optional; VPC attachment historically added cold-start latency (largely fixed since 2019) | Full VPC control |

Where Serverless Wins

Event-Driven Processing

Lambda excels at reacting to events: an image uploaded to S3, a message arriving in SQS, a row inserted in DynamoDB, or an API Gateway request. Each event triggers a function invocation independently. You don't need to manage polling, concurrency, or idle resources.

```typescript
// Lambda handler: resize image on S3 upload (AWS SDK v3)
import { S3Event } from 'aws-lambda';
import { S3Client, GetObjectCommand, PutObjectCommand } from '@aws-sdk/client-s3';
import sharp from 'sharp';

// Created once per execution environment, reused across warm invocations
const s3 = new S3Client({});

export const handler = async (event: S3Event) => {
  const bucket = event.Records[0].s3.bucket.name;
  // S3 event keys are URL-encoded (spaces arrive as '+')
  const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));

  const { Body } = await s3.send(new GetObjectCommand({ Bucket: bucket, Key: key }));
  const resized = await sharp(Buffer.from(await Body!.transformToByteArray()))
    .resize(800, 600)
    .toBuffer();

  await s3.send(new PutObjectCommand({
    Bucket: bucket,
    Key: `thumbnails/${key}`,
    Body: resized,
  }));
};
```

Low-Traffic APIs and Microservices

If your API handles fewer than 100,000 requests per month, Lambda + API Gateway costs essentially nothing. A container running 24/7 to handle a few requests per hour is pure waste.

Scheduled Tasks and Cron Jobs

EventBridge triggers a Lambda function on a schedule. No need to keep a container running just to execute a job every 6 hours.
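A scheduled handler can be as small as this sketch (the event type here is a hand-rolled subset of the EventBridge payload, and the cleanup task is illustrative):

```typescript
// Invoked by an EventBridge schedule rule (e.g. rate(6 hours)).
// Nothing runs -- and nothing is billed -- between invocations.
type ScheduledEvent = { time: string; 'detail-type': string };

export const handler = async (event: ScheduledEvent): Promise<string> => {
  // Illustrative periodic job: purge expired sessions, rotate a report, etc.
  console.log(`Scheduled run triggered at ${event.time}`);
  return 'done';
};
```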

Where Containers Win

Long-Running Processes

WebSocket servers, streaming applications, background workers that process continuously -- anything that runs for more than 15 minutes can't use Lambda. Containers have no execution time limit.
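The shape of such a process is a loop that never returns. A minimal sketch (the in-memory queue stands in for SQS, Kafka, or similar; names are illustrative):

```typescript
// A long-running worker loop -- the shape of process that can't live in
// Lambda. The queue is in-memory here for illustration only.
type Job = { id: number };

const queue: Job[] = [{ id: 1 }, { id: 2 }, { id: 3 }];
const processed: number[] = [];
let shuttingDown = false;

// Orchestrators send SIGTERM before stopping a task; a graceful worker
// finishes in-flight work instead of dying mid-job.
process.on('SIGTERM', () => { shuttingDown = true; });

export async function runWorker(): Promise<number[]> {
  while (!shuttingDown) {
    const job = queue.shift();
    if (!job) break; // real worker: long-poll or sleep, don't exit
    processed.push(job.id); // business logic goes here
  }
  return processed;
}
```

The SIGTERM handling matters: it's what lets a container drain cleanly during deployments and scale-in, something Lambda handles for you automatically.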

High-Throughput APIs

At scale, containers are cheaper per request than Lambda. A Fargate task (ARM, 2 vCPU, 4 GB) running 24/7 costs roughly $60/month and can serve thousands of requests per second for a lightweight service. The equivalent Lambda usage at sustained high traffic would cost significantly more.

Complex Applications

Applications with heavy dependencies, large binaries, or complex startup sequences work better in containers. A 500 MB Lambda deployment package hits the size limit and suffers terrible cold starts. A container has no such constraint.

Local Development Fidelity

Docker Compose gives you near-perfect parity between local and production. Lambda emulators (SAM local, serverless-offline) are approximations that miss edge cases in IAM, event source mapping, and cold start behavior.

The Middle Ground: Serverless Containers

Fargate, Cloud Run, and Azure Container Apps blur the line. You get container packaging with serverless scaling:

| Service | Provider | Scale to Zero | Pricing |
| --- | --- | --- | --- |
| AWS Fargate | AWS | No (minimum 1 task) | Per vCPU-second + per GB-second |
| Google Cloud Run | GCP | Yes | Per request + per vCPU-second + per GB-second |
| Azure Container Apps | Azure | Yes | Per vCPU-second + per GB-second |

Cloud Run is the closest thing to "Lambda for containers." It scales to zero, supports any language or binary, handles HTTP and gRPC, and has no execution time limit. If I were starting a new project on GCP, Cloud Run would be my default compute.

Pro tip: Fargate Spot gives you the cost benefits of spot instances for containers without managing EC2. It's up to 70% cheaper than regular Fargate and works well for fault-tolerant workloads like batch processing and CI runners.

Pricing Comparison: Real Workloads

Low-Traffic API (10,000 requests/day)

| Option | Monthly Cost |
| --- | --- |
| Lambda + API Gateway | ~$3 |
| Fargate (1 task, 0.25 vCPU) | ~$9 |
| ECS on EC2 (t3.micro) | ~$8 |

High-Traffic API (10 million requests/day)

| Option | Monthly Cost |
| --- | --- |
| Lambda + API Gateway | ~$450 |
| Fargate (4 tasks, 1 vCPU each) | ~$120 |
| ECS on EC2 (c6g.xlarge x2) | ~$100 |

At low traffic, serverless is cheaper because you pay nothing when idle. At high sustained traffic, containers win because you're paying for capacity rather than per-invocation.
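The crossover is easy to estimate yourself. A back-of-envelope sketch using us-east-1 list prices at the time of writing (the rates and workload parameters are assumptions; API Gateway and data transfer are excluded):

```typescript
// Rough monthly cost model -- list prices are assumptions and change;
// always check the current pricing pages before deciding.
const LAMBDA_PER_REQUEST = 0.20 / 1_000_000;   // $/invocation
const LAMBDA_PER_GB_SECOND = 0.0000166667;     // $/GB-second (x86)
const FARGATE_PER_VCPU_HOUR = 0.04048;         // $/vCPU-hour (x86)
const FARGATE_PER_GB_HOUR = 0.004445;          // $/GB-hour

export function lambdaMonthly(reqPerDay: number, avgMs: number, memGb: number): number {
  const requests = reqPerDay * 30;
  const gbSeconds = requests * (avgMs / 1000) * memGb;
  return requests * LAMBDA_PER_REQUEST + gbSeconds * LAMBDA_PER_GB_SECOND;
}

export function fargateMonthly(tasks: number, vcpu: number, memGb: number): number {
  const hours = 730; // average hours in a month
  return tasks * hours * (vcpu * FARGATE_PER_VCPU_HOUR + memGb * FARGATE_PER_GB_HOUR);
}

// 10M req/day at 50 ms / 256 MB vs. 4 Fargate tasks (1 vCPU, 2 GB each)
console.log(lambdaMonthly(10_000_000, 50, 0.25).toFixed(0));
console.log(fargateMonthly(4, 1, 2).toFixed(0));
```

Note that the Lambda term scales linearly with traffic while the Fargate term is fixed capacity: at low request counts Lambda collapses toward zero, and at sustained high traffic the fixed capacity wins.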

A Practical Decision Framework

  1. Event-driven, sporadic, or low-traffic? Use Lambda.
  2. Sustained high traffic? Use containers (Fargate or ECS/EKS on EC2).
  3. Runs longer than 15 minutes? Use containers.
  4. Needs scale-to-zero with container flexibility? Use Cloud Run or Azure Container Apps.
  5. Complex local dev workflow? Use containers with Docker Compose.
  6. Glue code between AWS services? Use Lambda -- it's purpose-built for this.

Frequently Asked Questions

What is a cold start and how bad is it really?

A cold start happens when Lambda spins up a new execution environment for your function. For Node.js and Python, cold starts are typically 100-500ms. For Java and .NET, they can reach 2-10 seconds. Provisioned Concurrency eliminates cold starts by keeping environments warm, but it costs money. For most APIs, occasional 200ms cold starts are imperceptible to users.
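One practical mitigation: do expensive initialization at module scope so it runs once per execution environment and is reused across warm invocations. A self-contained sketch (the counter stands in for an SDK client or DB connection):

```typescript
// Module scope runs once per execution environment (the cold start);
// the handler body runs on every invocation.
let initCount = 0;

function createClient(): { connectedAt: number } {
  initCount++; // expensive setup: SDK clients, DB connections, config fetch
  return { connectedAt: Date.now() };
}

const client = createClient(); // cold start only

export const handler = async (): Promise<number> => {
  void client; // warm invocations reuse the already-initialized client
  return initCount;
};
```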

Can I run a database inside a container?

You can, but you probably shouldn't in production. Containers are ephemeral by design -- when a container restarts, local data is lost unless you mount persistent storage. Managed database services (RDS, Cloud SQL, Azure Database) handle replication, backups, and failover far better than a self-managed database in a container.

Is vendor lock-in with Lambda a real concern?

It depends on your architecture. The Lambda function code itself is portable -- it's just a function. The lock-in is in the event sources (S3 triggers, SQS, DynamoDB streams), IAM permissions, and service integrations. If your business logic is cleanly separated from the Lambda handler, migrating to another platform is a wrapper change, not a rewrite.
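In practice that separation looks like this sketch (the function names and discount rule are made up for illustration):

```typescript
// Portable business logic: no AWS imports, trivially unit-testable.
export function discountFor(orderTotal: number): number {
  return orderTotal >= 100 ? orderTotal * 0.1 : 0;
}

// The only platform-specific piece -- a thin Lambda adapter. Migrating to
// Cloud Run or Express means rewriting this wrapper, not the logic above.
export const handler = async (event: { body: string }) => {
  const { total } = JSON.parse(event.body);
  return {
    statusCode: 200,
    body: JSON.stringify({ discount: discountFor(total) }),
  };
};
```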

Should I use ECS or EKS for containers on AWS?

Use ECS if you're AWS-only and want simplicity. ECS is fully managed, integrates deeply with AWS services, and requires no cluster management overhead. Use EKS if you need Kubernetes compatibility (multi-cloud portability, Kubernetes ecosystem tooling) or your team already knows Kubernetes. EKS adds operational complexity that isn't justified unless you specifically need Kubernetes features.

Can I combine Lambda and containers in the same application?

Absolutely, and most mature applications do. A common pattern: containers handle the core API (long-running, high-throughput), Lambda processes asynchronous events (S3 uploads, SQS messages, scheduled tasks), and Step Functions orchestrate complex workflows that span both. Use the right tool for each job rather than forcing one model everywhere.

What about Lambda container images?

Lambda supports container images up to 10 GB, letting you use the same Docker build process for both Lambda and container deployments. This is useful for functions with large dependencies (ML models, heavy libraries). Cold starts are longer with large images, but it eliminates the 250 MB unzipped deployment size limit and lets you use familiar Docker tooling.

Use Both, Thoughtfully

The best architectures combine serverless and containers based on each workload's characteristics. Don't adopt one model exclusively. Use Lambda for event-driven and sporadic workloads where scale-to-zero saves money. Use containers for high-throughput services, long-running processes, and complex applications where local development fidelity matters. The compute model should follow the workload, not the other way around.

Written by

Abhishek Patel

Infrastructure engineer with 10+ years building production systems on AWS, GCP, and bare metal. Writes practical guides on cloud architecture, containers, networking, and Linux for developers who want to understand how things actually work under the hood.
