AWS Lambda: When It Makes Sense (And When It Doesn't)
Lambda isn’t always the answer. But when it is, it’s a great one.
When Lambda Makes Sense
AWS Lambda excels in specific scenarios:
1. Event-driven workloads
- Processing uploads to S3
- Responding to database changes (DynamoDB Streams)
- Handling webhooks
- Queue processing (SQS) and event bus routing (EventBridge)
2. Unpredictable or spiky traffic
- APIs with variable load
- Batch jobs that run occasionally
- Development/staging environments
3. Microservices with clear boundaries
- Single-purpose functions
- Independent scaling requirements
- Different runtime needs per service
4. Cost-sensitive low-traffic applications
- Side projects
- Internal tools
- MVPs and prototypes
When Lambda Doesn’t Make Sense
Lambda has real limitations:
| Scenario | Problem | Better Alternative |
|---|---|---|
| Long-running processes (>15 min) | Hard timeout limit | Fargate, Step Functions |
| Consistent high traffic | Cold starts + cost | Fargate, ECS |
| WebSocket connections | Stateless execution | Fargate, App Runner |
| Heavy compute workloads | Memory/CPU limits | EC2, Fargate |
| Large dependencies (>250MB) | Package size limits | Container-based Lambda or Fargate |
The crossover point: At roughly 1 million requests per month with consistent traffic, Fargate often becomes cheaper than Lambda. Run the numbers for your specific workload.
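Running the numbers is a few lines of arithmetic. A rough sketch using the per-request and per-GB-second rates from the pricing section below; the Fargate comparison baseline would come from your own task sizing, not from this code:

```typescript
// Rough monthly Lambda cost: requests + duration. Rates are the published
// x86 figures ($0.20/M requests, $0.0000166667/GB-s); verify current pricing
// for your region before deciding.
const REQUEST_PRICE = 0.20 / 1_000_000; // $ per request
const GB_SECOND_PRICE = 0.0000166667;   // $ per GB-second

function lambdaMonthlyCost(requests: number, avgMs: number, memoryMb: number): number {
  const gbSeconds = requests * (avgMs / 1000) * (memoryMb / 1024);
  return requests * REQUEST_PRICE + gbSeconds * GB_SECOND_PRICE;
}

// Example: 1M requests/month, 100ms average, 512MB ≈ $1.03/month
const monthly = lambdaMonthlyCost(1_000_000, 100, 512);
```

At that scale Lambda is nearly free; the crossover appears only when consistent traffic keeps a Fargate task busy enough to amortize its fixed hourly cost.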
Architecture Patterns That Work
Pattern 1: API Gateway + Lambda
The classic serverless API:
```typescript
import * as apigateway from "aws-cdk-lib/aws-apigateway";
import * as lambda from "aws-cdk-lib/aws-lambda";
import { Duration } from "aws-cdk-lib";

const api = new apigateway.RestApi(this, "Api", {
  restApiName: "MyService",
  deployOptions: {
    stageName: "prod",
    throttlingRateLimit: 1000,
    throttlingBurstLimit: 500,
  },
});

const handler = new lambda.Function(this, "Handler", {
  runtime: lambda.Runtime.NODEJS_20_X,
  handler: "index.handler",
  code: lambda.Code.fromAsset("lambda"),
  memorySize: 256,
  timeout: Duration.seconds(10),
  environment: {
    TABLE_NAME: table.tableName, // `table` is a DynamoDB table defined elsewhere in the stack
  },
});

api.root.addResource("items").addMethod("GET", new apigateway.LambdaIntegration(handler));
```
Best for: CRUD APIs, webhooks, low-to-medium traffic services.
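The `index.handler` referenced by the stack above might look like the following minimal sketch. The item list is stubbed here for illustration; a real handler would query the DynamoDB table named in `process.env.TABLE_NAME`:

```typescript
// Minimal sketch of lambda/index.ts for the GET /items route.
// Stub data stands in for a DynamoDB query against process.env.TABLE_NAME.
interface ApiResult {
  statusCode: number;
  headers: Record<string, string>;
  body: string;
}

export async function handler(): Promise<ApiResult> {
  const items = [{ id: "1", name: "example" }]; // stub data, not a real query
  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ items }),
  };
}
```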
Pattern 2: Event Processing Pipeline
S3 uploads trigger processing:
```typescript
import * as lambda from "aws-cdk-lib/aws-lambda";
import * as s3 from "aws-cdk-lib/aws-s3";
import * as s3n from "aws-cdk-lib/aws-s3-notifications";
import { Duration } from "aws-cdk-lib";

const processImage = new lambda.Function(this, "ProcessImage", {
  runtime: lambda.Runtime.NODEJS_20_X,
  handler: "process.handler",
  code: lambda.Code.fromAsset("lambda"),
  memorySize: 1024, // Image processing needs memory
  timeout: Duration.minutes(5),
});

bucket.addEventNotification(
  s3.EventType.OBJECT_CREATED,
  new s3n.LambdaDestination(processImage),
  { prefix: "uploads/", suffix: ".jpg" }
);
```
Best for: File processing, ETL pipelines, async workflows.
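Inside the processing function, the S3 notification event carries one or more records. A small sketch of pulling bucket and key out of that payload (note that S3 URL-encodes object keys in the event, so spaces arrive as `+`):

```typescript
// Sketch: extract bucket/key pairs from an S3 notification event.
// Keys are URL-encoded in the event payload (spaces become '+'), so decode first.
interface S3Record {
  s3: { bucket: { name: string }; object: { key: string } };
}

function extractObjects(event: { Records: S3Record[] }): { bucket: string; key: string }[] {
  return event.Records.map(r => ({
    bucket: r.s3.bucket.name,
    key: decodeURIComponent(r.s3.object.key.replace(/\+/g, " ")),
  }));
}
```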
Pattern 3: Scheduled Tasks
Cron jobs without servers:
```typescript
import * as events from "aws-cdk-lib/aws-events";
import * as targets from "aws-cdk-lib/aws-events-targets";
import * as lambda from "aws-cdk-lib/aws-lambda";
import { Duration } from "aws-cdk-lib";

const dailyReport = new lambda.Function(this, "DailyReport", {
  runtime: lambda.Runtime.PYTHON_3_12,
  handler: "report.handler",
  code: lambda.Code.fromAsset("lambda/reports"),
  timeout: Duration.minutes(15),
});

new events.Rule(this, "DailySchedule", {
  schedule: events.Schedule.cron({ hour: "6", minute: "0" }),
  targets: [new targets.LambdaFunction(dailyReport)],
});
```
Best for: Reports, cleanup jobs, data synchronization.
Cold Start Optimization
Cold starts are Lambda’s biggest pain point. Here’s how to minimize them:
1. Right-size memory allocation
More memory = more CPU = faster cold starts. Test different configurations:
```typescript
const handler = new lambda.Function(this, "Handler", {
  memorySize: 512, // Often faster than 256MB despite higher per-ms cost
  // ...
});
```
2. Use Provisioned Concurrency for critical paths
```typescript
const version = handler.currentVersion;

new lambda.Alias(this, "ProdAlias", {
  aliasName: "prod",
  version,
  provisionedConcurrentExecutions: 5, // 5 warm instances always ready
});
```
Cost consideration: Provisioned concurrency has a fixed cost. Only use it for latency-critical paths.
3. Keep dependencies minimal
```typescript
// Bad: ~50MB node_modules from the monolithic v2 SDK
import AWS from "aws-sdk";

// Good: ~3MB with targeted v3 imports
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
```
4. Initialize outside the handler
```typescript
import { DynamoDBClient, GetItemCommand } from "@aws-sdk/client-dynamodb";
import type { APIGatewayEvent } from "aws-lambda";

// Good: Initialized once per container, reused across warm invocations
const dynamodb = new DynamoDBClient({});

export async function handler(event: APIGatewayEvent) {
  // Handler reuses the client
  return dynamodb.send(new GetItemCommand({ /* ... */ }));
}
```
Cost Optimization
Lambda pricing has three components:
- Requests: $0.20 per million
- Duration: $0.0000166667 per GB-second
- Data transfer: Standard AWS rates
Optimization strategies
Right-size memory:
| Memory | Avg duration | Duration cost per 1M invocations |
|---|---|---|
| 128MB | 200ms | $0.42 |
| 256MB | 120ms | $0.50 |
| 512MB | 80ms | $0.67 |
| 1024MB | 50ms | $0.83 |
These figures exclude the flat $0.20-per-million request charge, which is the same at every memory size. Sometimes higher memory is cheaper overall because execution finishes faster.
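The table above is just arithmetic on the duration rate quoted earlier; a small sketch makes it reproducible for other configurations:

```typescript
// Duration cost for 1M invocations: memoryGB × seconds × 1e6 × rate.
const GB_SECOND_PRICE = 0.0000166667; // $/GB-second, x86, from the pricing list above

function durationCostPerMillion(memoryMb: number, durationMs: number): number {
  const gbSeconds = (memoryMb / 1024) * (durationMs / 1000) * 1_000_000;
  return gbSeconds * GB_SECOND_PRICE;
}

// durationCostPerMillion(128, 200)  ≈ 0.42
// durationCostPerMillion(1024, 50) ≈ 0.83
```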
Use ARM64 (Graviton2):
```typescript
const handler = new lambda.Function(this, "Handler", {
  architecture: lambda.Architecture.ARM_64, // ~20% cheaper, often faster
  // ...
});
```
Batch operations:
```typescript
// Bad: one downstream call (or invocation) per item
items.forEach(item => processItem(item));

// Good: process multiple items per invocation
const batchSize = 100;
for (let i = 0; i < items.length; i += batchSize) {
  await processBatch(items.slice(i, i + batchSize));
}
```
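The batching loop can be factored into a small reusable helper:

```typescript
// Split an array into fixed-size batches; the last batch may be smaller.
function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// chunk([1, 2, 3, 4, 5], 2) → [[1, 2], [3, 4], [5]]
```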
Common Mistakes
1. Putting everything in one Lambda
```typescript
// Bad: Monolithic Lambda
export async function handler(event) {
  switch (event.path) {
    case "/users": return handleUsers(event);
    case "/orders": return handleOrders(event);
    case "/products": return handleProducts(event);
    // 50 more routes...
  }
}
```
This defeats the purpose of serverless. Each route has different scaling needs.
2. Ignoring timeout configuration
```typescript
// Bad: relying on the default 3-second timeout
const handler = new lambda.Function(this, "Handler", {
  // timeout defaults to 3 seconds
});

// Good: explicit timeout based on actual needs
const handler = new lambda.Function(this, "Handler", {
  timeout: Duration.seconds(30),
});
```
3. Not handling retries
Lambda retries failed async invocations. Your code must be idempotent:
```typescript
export async function handler(event) {
  const requestId = event.requestContext?.requestId;

  // Check if already processed
  const existing = await getProcessedRequest(requestId);
  if (existing) {
    return existing.result;
  }

  // Process and store result
  const result = await processEvent(event);
  await storeProcessedRequest(requestId, result);
  return result;
}
```
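The same check-then-process pattern, sketched with an in-memory store so the shape is easy to test. Real code would back this with DynamoDB or similar, since a container's memory does not survive recycling:

```typescript
// Sketch: idempotent processing keyed by request ID.
// A Map stands in for a durable store (DynamoDB in practice) —
// warm-container memory is NOT durable, so this is illustration only.
const processed = new Map<string, unknown>();

async function processOnce<T>(
  requestId: string,
  work: () => Promise<T>
): Promise<T> {
  if (processed.has(requestId)) {
    return processed.get(requestId) as T; // retry: return the stored result
  }
  const result = await work();
  processed.set(requestId, result);
  return result;
}
```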
4. Synchronous calls to other Lambdas
```typescript
import { LambdaClient, InvokeCommand } from "@aws-sdk/client-lambda";
import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";

// Bad: Lambda calling Lambda synchronously — you pay for both while one waits
const result = await lambdaClient.send(new InvokeCommand({
  FunctionName: "OtherFunction",
  InvocationType: "RequestResponse",
  Payload: JSON.stringify(data),
}));

// Good: use async invocation or queues
await sqsClient.send(new SendMessageCommand({
  QueueUrl: queueUrl,
  MessageBody: JSON.stringify(data),
}));
```
Monitoring and Debugging
Essential CloudWatch metrics
- Invocations: Total calls
- Duration: Execution time
- Errors: Failed executions
- Throttles: Rate limit hits
- ConcurrentExecutions: Parallel instances
Structured logging
```typescript
export async function handler(event) {
  console.log(JSON.stringify({
    level: "INFO",
    message: "Processing request",
    requestId: event.requestContext?.requestId,
    path: event.path,
  }));
  // ... handler logic
}
```
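The inline `console.log` can be wrapped in a tiny helper so every line has the same queryable shape:

```typescript
// Sketch: one-line structured logger for CloudWatch.
type LogLevel = "INFO" | "WARN" | "ERROR";

function logLine(level: LogLevel, message: string, fields: Record<string, unknown> = {}): string {
  const line = JSON.stringify({ level, message, ...fields });
  console.log(line); // CloudWatch captures stdout line by line
  return line;
}
```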
Use CloudWatch Logs Insights to query:
```
fields @timestamp, @message
| filter level = "ERROR"
| sort @timestamp desc
| limit 100
```
Decision Framework
Use this to decide if Lambda fits your use case:
```
Is your workload event-driven?
├── Yes → Lambda is a good fit
└── No  → Consider Fargate/ECS

Is traffic unpredictable or spiky?
├── Yes → Lambda handles this well
└── No  → Calculate cost vs. Fargate

Do you need sub-second response times?
├── Yes → Use Provisioned Concurrency or Fargate
└── No  → Standard Lambda is fine

Are executions under 15 minutes?
├── Yes → Lambda works
└── No  → Use Step Functions or Fargate
```
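The same questions can be encoded as a function. This is a sketch of the tree above, not a complete decision model; the input names are invented for illustration:

```typescript
// Sketch: the four-question decision tree as code. Real decisions weigh more factors.
interface Workload {
  eventDriven: boolean;
  spikyTraffic: boolean;
  needsSubSecondLatency: boolean;
  maxRuntimeMinutes: number;
}

function recommend(w: Workload): string {
  if (w.maxRuntimeMinutes > 15) return "Step Functions or Fargate"; // hard timeout limit
  if (!w.eventDriven && !w.spikyTraffic) return "Compare cost vs. Fargate/ECS";
  if (w.needsSubSecondLatency) return "Lambda with Provisioned Concurrency";
  return "Lambda";
}
```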
Lessons Learned
- Lambda isn’t always cheaper. Run the numbers for your traffic pattern.
- Cold starts matter for user-facing APIs. Plan for them or use Provisioned Concurrency.
- Keep functions focused. One purpose per Lambda.
- Idempotency is required. Retries will happen.
- Monitor from day one. CloudWatch dashboards catch problems early.
Lambda is a powerful tool when applied to the right problems. The key is understanding where it excels and where other options make more sense.