# Migrate from Heroku to AWS: A Step-by-Step Guide
Heroku’s pricing changes pushed many teams to AWS. Here’s how to make the move.
## Why Migrate?
Common reasons teams leave Heroku:
- Cost: Heroku’s new pricing ($5-7/dyno) adds up quickly
- Control: Need VPC, private networking, or specific AWS services
- Scale: Hitting Heroku’s limits on database size or dyno count
- Compliance: Requirements for data residency or specific certifications
## The Migration Path
Most Heroku apps map cleanly to AWS services:
| Heroku | AWS Equivalent |
|---|---|
| Web Dyno | Fargate / App Runner |
| Worker Dyno | Fargate / Lambda |
| Heroku Postgres | RDS PostgreSQL |
| Heroku Redis | ElastiCache Redis |
| Heroku Scheduler | EventBridge + Lambda |
| Config Vars | Systems Manager Parameter Store |
| Heroku Pipelines | CodePipeline / GitHub Actions |
## Step 1: Containerize Your App
If your app isn’t already containerized, start here.
### Node.js example

```dockerfile
FROM node:20-slim AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Drop devDependencies so the runtime image only ships production modules
RUN npm prune --omit=dev

FROM node:20-slim
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY package*.json ./
USER node
EXPOSE 8080
CMD ["node", "dist/index.js"]
```
### Rails example

```dockerfile
FROM ruby:3.2-slim AS builder
WORKDIR /app
RUN apt-get update && apt-get install -y build-essential libpq-dev && rm -rf /var/lib/apt/lists/*
COPY Gemfile Gemfile.lock ./
# --deployment/--without are deprecated as CLI flags; set them via bundle config
RUN bundle config set --local deployment "true" && \
    bundle config set --local without "development test" && \
    bundle install

FROM ruby:3.2-slim
WORKDIR /app
RUN apt-get update && apt-get install -y libpq5 && rm -rf /var/lib/apt/lists/*
COPY --from=builder /app/vendor ./vendor
# The .bundle config is what tells `bundle exec` to use the vendored gems
COPY --from=builder /app/.bundle ./.bundle
COPY . .
USER nobody
EXPOSE 3000
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]
```
### Test locally

```bash
docker build -t myapp .
docker run -p 8080:8080 -e DATABASE_URL=postgres://... myapp
```

If it runs locally, it will very likely run on AWS; the differences that remain are networking, IAM permissions, and configuration.
## Step 2: Set Up the Database
### Option A: AWS Database Migration Service (DMS)
Best for large databases or minimal downtime requirements.
```typescript
const replicationInstance = new dms.CfnReplicationInstance(this, "DmsInstance", {
  replicationInstanceClass: "dms.t3.medium",
  allocatedStorage: 50,
  vpcSecurityGroupIds: [securityGroup.securityGroupId],
  replicationSubnetGroupIdentifier: subnetGroup.ref,
});

const migrationTask = new dms.CfnReplicationTask(this, "MigrationTask", {
  migrationType: "full-load-and-cdc", // Full copy + ongoing changes
  replicationInstanceArn: replicationInstance.ref,
  sourceEndpointArn: herokuEndpoint.ref,
  targetEndpointArn: rdsEndpoint.ref,
  tableMappings: JSON.stringify({
    rules: [
      {
        "rule-type": "selection",
        "rule-id": "1", // rule-id and rule-name are required by DMS
        "rule-name": "include-public",
        "rule-action": "include",
        "object-locator": { "schema-name": "public", "table-name": "%" },
      },
    ],
  }),
});
```
### Option B: pg_dump/pg_restore
Simpler for smaller databases. Requires brief downtime.
```bash
# Export from Heroku
heroku pg:backups:capture --app your-app
heroku pg:backups:download --app your-app

# Or direct dump
pg_dump $HEROKU_DATABASE_URL -Fc > backup.dump

# Import to RDS (--no-owner/--no-acl skip Heroku-specific roles and grants)
pg_restore --no-owner --no-acl \
  -h your-rds-host.rds.amazonaws.com \
  -U postgres -d yourdb backup.dump
```
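After the restore, it is worth sanity-checking that every table made it across. A minimal sketch in TypeScript, assuming you have already captured per-table row counts from both databases (e.g. via `SELECT relname, n_live_tup FROM pg_stat_user_tables`; note `n_live_tup` is an estimate, so use `count(*)` where exactness matters). The comparison helper is hypothetical, not part of any AWS tooling:

```typescript
// Hypothetical helper: compare per-table row counts captured from the
// Heroku (source) and RDS (target) databases after the restore.
export function diffRowCounts(
  source: Record<string, number>,
  target: Record<string, number>
): string[] {
  const problems: string[] = [];
  for (const [table, count] of Object.entries(source)) {
    const got = target[table];
    if (got === undefined) {
      problems.push(`${table}: missing on target`);
    } else if (got !== count) {
      problems.push(`${table}: ${count} rows on source, ${got} on target`);
    }
  }
  return problems;
}
```

An empty result means every source table exists on the target with a matching count; anything else is worth investigating before cutover.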
### RDS configuration

```typescript
const database = new rds.DatabaseInstance(this, "Database", {
  engine: rds.DatabaseInstanceEngine.postgres({
    version: rds.PostgresEngineVersion.VER_15,
  }),
  instanceType: ec2.InstanceType.of(ec2.InstanceClass.T4G, ec2.InstanceSize.MICRO),
  vpc,
  vpcSubnets: { subnetType: ec2.SubnetType.PRIVATE_ISOLATED },
  allocatedStorage: 20,
  maxAllocatedStorage: 100, // Storage auto-scaling ceiling
  backupRetention: Duration.days(7),
  deletionProtection: true,
  credentials: rds.Credentials.fromGeneratedSecret("postgres"),
});
```
## Step 3: Deploy to Fargate
### Basic Fargate setup

```typescript
const cluster = new ecs.Cluster(this, "Cluster", { vpc });

const taskDefinition = new ecs.FargateTaskDefinition(this, "TaskDef", {
  memoryLimitMiB: 512,
  cpu: 256,
  runtimePlatform: {
    cpuArchitecture: ecs.CpuArchitecture.ARM64,
    operatingSystemFamily: ecs.OperatingSystemFamily.LINUX,
  },
});

taskDefinition.addContainer("App", {
  image: ecs.ContainerImage.fromEcrRepository(repository, "latest"),
  portMappings: [{ containerPort: 8080 }],
  environment: {
    NODE_ENV: "production",
    PORT: "8080",
  },
  secrets: {
    // Note: the generated RDS secret is a JSON object (username, password,
    // host, port, dbname), not a connection string. Either parse it in the
    // app or inject a single field, e.g.
    // ecs.Secret.fromSecretsManager(database.secret!, "password").
    DATABASE_URL: ecs.Secret.fromSecretsManager(database.secret!),
  },
  logging: ecs.LogDrivers.awsLogs({
    streamPrefix: "app",
    logRetention: logs.RetentionDays.ONE_MONTH,
  }),
});

const service = new ecs_patterns.ApplicationLoadBalancedFargateService(this, "Service", {
  cluster,
  taskDefinition,
  desiredCount: 2,
  publicLoadBalancer: true,
});
```
### Health checks

Heroku health-checks dynos automatically. On AWS, configure the load balancer's check explicitly:

```typescript
service.targetGroup.configureHealthCheck({
  path: "/health",
  healthyHttpCodes: "200",
  interval: Duration.seconds(30),
  timeout: Duration.seconds(5),
  healthyThresholdCount: 2,
  unhealthyThresholdCount: 3,
});
```
Add a health endpoint to your app:
```typescript
app.get("/health", (req, res) => {
  res.status(200).json({ status: "ok" });
});
```
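A static 200 keeps the ALB happy, but it will keep routing traffic to a task whose database connection has died. A hedged sketch of a dependency-aware variant; `checkDb` stands in for whatever probe your stack supports (e.g. `SELECT 1` with a short timeout) and is not part of any AWS SDK:

```typescript
// Sketch: return 503 when a dependency probe fails so the target group
// marks this task unhealthy and the ALB stops routing traffic to it.
// `checkDb` is a hypothetical probe function supplied by the caller.
export async function healthResponse(
  checkDb: () => Promise<boolean>
): Promise<{ status: number; body: { status: string } }> {
  try {
    const ok = await checkDb();
    return ok
      ? { status: 200, body: { status: "ok" } }
      : { status: 503, body: { status: "degraded" } };
  } catch {
    return { status: 503, body: { status: "degraded" } };
  }
}

// Wired into the Express route it might look like:
// app.get("/health", async (_req, res) => {
//   const { status, body } = await healthResponse(pingDatabase);
//   res.status(status).json(body);
// });
```

Keep the probe cheap: with the 30-second interval above, every task runs it continuously.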
## Step 4: Environment Variables
### Export from Heroku

```bash
heroku config --app your-app --shell > .env.heroku
```
### Store in Parameter Store

```typescript
const databaseUrl = new ssm.StringParameter(this, "DatabaseUrl", {
  parameterName: "/myapp/prod/DATABASE_URL",
  stringValue: "postgres://...", // Or reference the RDS secret
  tier: ssm.ParameterTier.STANDARD,
});

// Reference in the task definition. Note that CloudFormation (and therefore
// CDK) cannot create SecureString parameters, so truly sensitive values
// belong in Secrets Manager.
secrets: {
  DATABASE_URL: ecs.Secret.fromSsmParameter(databaseUrl),
}
```
For sensitive values, use Secrets Manager:
```typescript
const apiKey = new secretsmanager.Secret(this, "ApiKey", {
  secretName: "/myapp/prod/api-key",
});

secrets: {
  API_KEY: ecs.Secret.fromSecretsManager(apiKey),
}
```
## Step 5: CI/CD Pipeline
Replace Heroku Pipelines with GitHub Actions:
```yaml
# .github/workflows/deploy.yml
name: Deploy

on:
  push:
    branches: [main]

env:
  AWS_REGION: us-east-1
  ECR_REPOSITORY: myapp
  ECS_CLUSTER: myapp-cluster
  ECS_SERVICE: myapp-service

jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789:role/GitHubActionsRole
          aws-region: ${{ env.AWS_REGION }}

      - name: Login to ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v2

      - name: Build and push image
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          IMAGE_TAG: ${{ github.sha }}
        run: |
          # The task definition references :latest, so push that tag too --
          # otherwise --force-new-deployment redeploys the old image
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG \
                       -t $ECR_REGISTRY/$ECR_REPOSITORY:latest .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:latest

      - name: Deploy to ECS
        run: |
          aws ecs update-service \
            --cluster $ECS_CLUSTER \
            --service $ECS_SERVICE \
            --force-new-deployment
```
## Step 6: DNS Cutover
### Zero-downtime approach

1. Keep Heroku running during the migration
2. Point a test subdomain to AWS (e.g., aws.yourapp.com)
3. Test thoroughly
4. Update DNS to point the main domain to AWS
5. Monitor for issues
6. Decommission Heroku after DNS propagation (24-48 hours)
```typescript
// Route 53 record pointing to the ALB
new route53.ARecord(this, "AppRecord", {
  zone: hostedZone,
  recordName: "app",
  target: route53.RecordTarget.fromAlias(
    new targets.LoadBalancerTarget(service.loadBalancer)
  ),
});
```
## Common Pitfalls
### 1. Forgetting about workers
Heroku worker dynos need separate Fargate services:
```typescript
const workerTask = new ecs.FargateTaskDefinition(this, "WorkerTask", {
  memoryLimitMiB: 512,
  cpu: 256,
});

workerTask.addContainer("Worker", {
  image: ecs.ContainerImage.fromEcrRepository(repository, "latest"),
  command: ["node", "dist/worker.js"], // Different entrypoint
  // ... same secrets/environment as the web container
});

new ecs.FargateService(this, "WorkerService", {
  cluster,
  taskDefinition: workerTask,
  desiredCount: 1,
});
```
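One behavior carries over from Heroku: ECS sends SIGTERM on every deploy or scale-in and force-kills the container after the task's stop timeout (30 seconds by default), so the worker should drain its in-flight job before exiting. A minimal sketch; the job-loop shape is an assumption, not tied to any particular queue library:

```typescript
// Sketch: a worker loop that stops picking up new jobs once SIGTERM
// arrives, letting the in-flight job finish before the process exits.
// `processNextJob` is a placeholder for your queue's "take one job" call.
export class GracefulWorker {
  private shuttingDown = false;

  constructor(private processNextJob: () => Promise<void>) {
    process.on("SIGTERM", () => this.stop());
  }

  stop(): void {
    this.shuttingDown = true;
  }

  async run(): Promise<void> {
    while (!this.shuttingDown) {
      await this.processNextJob(); // the current job always completes
    }
  }
}
```

If a single job can take longer than 30 seconds, raise `stopTimeout` on the container definition to match.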
### 2. Heroku-specific environment variables

Some Heroku variables don't exist on AWS:

| Heroku | AWS Alternative |
|---|---|
| `PORT` | Set explicitly (usually 8080) |
| `DYNO` | `ECS_CONTAINER_METADATA_URI` |
| `DATABASE_URL` | Construct from the RDS secret |
| `REDIS_URL` | Construct from the ElastiCache endpoint |
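"Construct from the RDS secret" deserves a concrete example: the generated secret is a JSON object, not a URL, so the app (or an entrypoint script) has to assemble the connection string itself. A sketch assuming the standard RDS secret field names (`username`, `password`, `host`, `port`, `dbname`); verify them against your own secret's contents:

```typescript
// Sketch: build a Postgres connection URL from the JSON blob stored in an
// RDS-generated Secrets Manager secret. Field names follow the standard
// RDS secret format, but check your actual secret before relying on them.
interface RdsSecret {
  username: string;
  password: string;
  host: string;
  port: number;
  dbname?: string;
}

export function databaseUrlFromSecret(secretJson: string, fallbackDb = "postgres"): string {
  const s: RdsSecret = JSON.parse(secretJson);
  // Generated passwords can contain URL-significant characters, so encode them
  const user = encodeURIComponent(s.username);
  const pass = encodeURIComponent(s.password);
  return `postgres://${user}:${pass}@${s.host}:${s.port}/${s.dbname ?? fallbackDb}`;
}
```

The alternative is to skip the URL entirely and inject individual secret fields (`ecs.Secret.fromSecretsManager(secret, "password")`, etc.) as separate environment variables.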
### 3. Scheduled tasks

Replace Heroku Scheduler with EventBridge:

```typescript
const scheduledTask = new ecs_patterns.ScheduledFargateTask(this, "DailyTask", {
  cluster,
  scheduledFargateTaskDefinitionOptions: {
    taskDefinition: workerTask,
  },
  schedule: appscaling.Schedule.cron({ hour: "6", minute: "0" }), // 06:00 UTC
});
```
### 4. Logs

Heroku has `heroku logs --tail`. On AWS, use CloudWatch:

```bash
# Install the third-party awslogs tool
pip install awslogs

# Tail logs
awslogs get /ecs/myapp --watch

# Or, with AWS CLI v2:
aws logs tail /ecs/myapp --follow
```
Or use the AWS Console’s CloudWatch Logs Insights.
### 5. Review apps
Heroku Review Apps need manual replacement. Options:
- GitHub Environments with manual approval
- AWS App Runner for automatic preview deployments
- Custom solution with ephemeral Fargate tasks per PR
## Cost Comparison
For a typical small app (1 web dyno, 1 worker, basic Postgres):
| Component | Heroku | AWS |
|---|---|---|
| Web | $7/mo (Basic) or $25/mo (Standard) | ~$9/mo (Fargate Spot) |
| Worker | $7-25/mo | ~$9/mo (Fargate Spot) |
| Database | $9/mo (Mini) | ~$12/mo (db.t4g.micro) |
| Total | $23-59/mo | ~$30/mo |
AWS gets cheaper at scale due to reserved capacity and Spot pricing.
## Migration Checklist
- [ ] Containerize application
- [ ] Test container locally
- [ ] Set up VPC and networking
- [ ] Create RDS instance
- [ ] Migrate database
- [ ] Deploy to Fargate
- [ ] Configure environment variables
- [ ] Set up CI/CD pipeline
- [ ] Configure health checks
- [ ] Test thoroughly on subdomain
- [ ] Cut over DNS
- [ ] Monitor for 48 hours
- [ ] Decommission Heroku
## Lessons Learned
- **Don't rush the database migration.** Test thoroughly. Data loss is unrecoverable.
- **Keep Heroku running until you're confident.** The overlap cost is worth it.
- **Container size matters.** Smaller images = faster deployments = lower costs.
- **Networking is different.** VPCs, security groups, and subnets take time to understand.
- **Monitoring requires setup.** Heroku's built-in metrics don't exist on AWS. Build dashboards early.
Migrating from Heroku to AWS is work upfront, but the control and cost savings are worth it for growing applications.