Deploy to AWS with Docker

Containerize your application with Docker and deploy it to AWS using ECR and ECS.

🟡 Intermediate ⏱️ 3 hours Cloud Computing / DevOps


What You're Building

You'll take a working web application, containerize it with Docker, push the container image to Amazon Elastic Container Registry (ECR), and deploy it to AWS using Amazon Elastic Container Service (ECS) with Fargate.

By the end, you'll have:

  • A Docker container running your application
  • The container image stored in AWS ECR
  • The application running live on AWS, accessible via a public URL
  • A solid understanding of the containerized deployment workflow used in professional environments

Before You Start

What you need installed:

  • Docker Desktop — free at docker.com/products/docker-desktop. Install it and confirm it's running (you should see the Docker whale icon in your taskbar/menu bar)
  • AWS CLI — AWS's command line tool. Install at docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html
  • A text editor — VS Code recommended

What you need:

  • An AWS account — if you don't have one, sign up at aws.amazon.com. The services used in this tutorial fall within the AWS Free Tier for new accounts
  • Basic comfort with the command line — navigating directories, running commands

Verify Docker is working: Open your terminal and run:

docker --version

You should see something like Docker version 24.x.x. If you see an error, make sure Docker Desktop is running.

Verify AWS CLI is installed:

aws --version

You should see aws-cli/2.x.x. If not, revisit the installation instructions.

Time commitment: 3 hours. Some steps involve waiting for AWS to provision resources — those moments are noted.

Step 1: Configure AWS CLI

What we're doing: Connecting the AWS CLI on your computer to your AWS account so you can interact with AWS services from the terminal.

1.1 — Create an IAM User

Never use your AWS root account for day-to-day work. We'll create an IAM user with programmatic access.

In the AWS Console:

  1. Go to IAM (search "IAM" in the services search bar)
  2. Click Users → Create user
  3. Username: docker-deploy-user
  4. Select Attach policies directly
  5. Attach: AmazonEC2ContainerRegistryFullAccess and AmazonECS_FullAccess
  6. Complete the user creation
  7. Click the user → Security credentials tab → Create access key
  8. Choose CLI as the use case
  9. Download or copy the Access Key ID and Secret Access Key — you'll need them in the next step and won't be able to retrieve the secret key again
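If you already have admin credentials configured in a shell, the console steps above can also be sketched as AWS CLI calls. This is an optional alternative, not part of the tutorial's console flow; the commands are wrapped in a function so nothing runs until you invoke it:

```shell
# Optional sketch: create the IAM user from the CLI instead of the console.
# Assumes this shell already has working admin credentials; otherwise use
# the console flow described above.
create_deploy_user() {
  aws iam create-user --user-name docker-deploy-user
  aws iam attach-user-policy --user-name docker-deploy-user \
    --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess
  aws iam attach-user-policy --user-name docker-deploy-user \
    --policy-arn arn:aws:iam::aws:policy/AmazonECS_FullAccess
  # Prints the Access Key ID and Secret Access Key once; save them now.
  aws iam create-access-key --user-name docker-deploy-user
}
```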

1.2 — Configure the CLI

In your terminal:

aws configure

You'll be prompted for:

AWS Access Key ID: [paste your access key]
AWS Secret Access Key: [paste your secret key]
Default region name: us-east-1
Default output format: json

1.3 — Verify the connection:

aws sts get-caller-identity

You should see a JSON response with your account ID and the IAM user ARN. If you see an error about credentials, double-check that you pasted the keys correctly.
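The response has roughly this shape (the IDs below are placeholders, not real values):

```json
{
    "UserId": "AIDAXXXXXXXXXXXXXXXXX",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/docker-deploy-user"
}
```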

✅ Checkpoint: The aws sts get-caller-identity command returns a JSON response with your account information — no error.

Troubleshooting: If you see Unable to locate credentials, run aws configure again and carefully re-enter your access key and secret key. Make sure there are no extra spaces.

Step 2: Create the Application

What we're doing: Creating a simple Node.js web application to containerize and deploy. If you already have an application you want to deploy, you can adapt these steps to your project.

2.1 — Create the project folder

mkdir docker-aws-app
cd docker-aws-app

2.2 — Initialize a Node.js project

npm init -y

This creates a package.json file with default settings.

2.3 — Install Express

npm install express

Express is a minimal Node.js web framework — perfect for this tutorial.

2.4 — Create the application file

Create a file called app.js and add:

const express = require('express');
const app = express();
const PORT = process.env.PORT || 3000;

app.get('/', (req, res) => {
  res.send(`
    <!DOCTYPE html>
    <html>
      <head>
        <title>Docker + AWS Deployment</title>
        <style>
          body {
            font-family: system-ui, sans-serif;
            background: #0f0f1a;
            color: #e0e0e0;
            display: flex;
            justify-content: center;
            align-items: center;
            min-height: 100vh;
            margin: 0;
          }
          .container {
            text-align: center;
            padding: 2rem;
          }
          h1 { color: #6c63ff; font-size: 2.5rem; }
          p { color: #9a9ab0; margin-top: 1rem; }
          .badge {
            display: inline-block;
            background: #1a1a2e;
            border: 1px solid #6c63ff;
            padding: 0.5rem 1.2rem;
            border-radius: 8px;
            margin-top: 1.5rem;
            font-size: 0.9rem;
          }
        </style>
      </head>
      <body>
        <div class="container">
          <h1>🚀 Deployed with Docker & AWS</h1>
          <p>This application is running inside a Docker container on AWS ECS.</p>
          <div class="badge">Container: Running ✓ | Cloud: AWS ✓</div>
        </div>
      </body>
    </html>
  `);
});
app.listen(PORT, () => {
  console.log(`Server running on port ${PORT}`);
});

2.5 — Test locally

node app.js

Open your browser and go to http://localhost:3000. You should see the styled page. When it looks good, stop the server with Ctrl+C.

✅ Checkpoint: The application runs locally at http://localhost:3000 and displays the styled page correctly.

Step 3: Write the Dockerfile

What we're doing: Creating the Dockerfile — the instructions Docker uses to build a container image of your application.

Create a file called Dockerfile (no extension) in your project folder and add:

# Use the official Node.js 20 LTS image as the base
FROM node:20-alpine

# Set the working directory inside the container
WORKDIR /app

# Copy dependency files first (Docker layer caching optimization)
COPY package*.json ./

# Install dependencies
RUN npm install --production

# Copy the rest of the application code
COPY . .

# Expose port 3000 so it's accessible outside the container
EXPOSE 3000

# The command that runs when the container starts
CMD ["node", "app.js"]

Create a .dockerignore file to exclude files that shouldn't be in the container:

node_modules
npm-debug.log
.git
.gitignore
README.md

What the Dockerfile does β€” explained line by line:

FROM node:20-alpine — Every Docker image starts from a base image. We're using the official Node.js 20 image built on Alpine Linux — a minimal Linux distribution that keeps the image small and fast. Alpine-based images are typically 5-10x smaller than their full Linux counterparts.

WORKDIR /app — Sets the working directory inside the container. All subsequent commands run from /app. If it doesn't exist, Docker creates it.

COPY package*.json ./ — Copies package.json and package-lock.json before copying the rest of the code. This is a layer caching optimization — Docker caches each layer. If your code changes but your dependencies don't, Docker reuses the cached npm install layer instead of reinstalling everything. This makes rebuilds dramatically faster.

RUN npm install --production — Installs only production dependencies — development dependencies aren't needed in the container.

COPY . . — Copies all remaining files from your project into the container.

EXPOSE 3000 — Documents which port the container listens on. It doesn't actually open the port — that happens when you run the container — but it's important metadata.

CMD ["node", "app.js"] — The command that runs when the container starts. Uses array syntax (exec form) rather than string syntax (shell form) — exec form doesn't spawn a shell, which makes the process handle signals correctly.

Step 4: Build and Test the Container

What we're doing: Building the Docker image locally and confirming the container runs correctly before pushing it to AWS.

4.1 — Build the image

docker build -t docker-aws-app .

-t docker-aws-app names (tags) the image. The trailing . tells Docker to use the current directory as the build context.

You'll see Docker executing each instruction in your Dockerfile. The first build takes a minute or two as it pulls the base image and installs dependencies. Subsequent builds are faster because of layer caching.

4.2 — Verify the image was created

docker images

You should see docker-aws-app in the list with a recent creation date.

4.3 — Run the container locally

docker run -p 3000:3000 docker-aws-app

-p 3000:3000 maps port 3000 on your computer to port 3000 inside the container. Open http://localhost:3000 in your browser.

The page should look exactly the same as when you ran it with node app.js — but this time, it's running inside a container. The environment inside the container is completely isolated from your machine.

Stop the container with Ctrl+C.

✅ Checkpoint: docker images shows your docker-aws-app image. The container runs successfully and the page loads at http://localhost:3000.

Troubleshooting: If docker build fails with a permissions error, make sure Docker Desktop is running. If the page doesn't load, confirm the -p 3000:3000 flag is in your run command.

Step 5: Push to Amazon ECR

What we're doing: Creating a private container registry in AWS and pushing your Docker image to it so AWS services can pull and run it.

5.1 β€” Create an ECR repository

aws ecr create-repository \
  --repository-name docker-aws-app \
  --region us-east-1

The response includes a repositoryUri — copy it. It looks like:
123456789012.dkr.ecr.us-east-1.amazonaws.com/docker-aws-app

Set it as a variable for convenience (replace with your actual URI):

export ECR_URI=123456789012.dkr.ecr.us-east-1.amazonaws.com/docker-aws-app
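If you'd rather not copy-paste, the URI can be assembled from its parts. A small sketch — the literal account ID below is a placeholder, and the commented-out aws command is the real lookup:

```shell
# Assemble the ECR URI from its parts instead of copy-pasting it.
# Look up your real account ID with:
#   AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
AWS_ACCOUNT_ID=123456789012   # placeholder: your 12-digit account ID
AWS_REGION=us-east-1
REPO_NAME=docker-aws-app
export ECR_URI="${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/${REPO_NAME}"
echo "$ECR_URI"
```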

5.2 — Authenticate Docker with ECR

aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin \
  123456789012.dkr.ecr.us-east-1.amazonaws.com

Replace 123456789012.dkr.ecr.us-east-1.amazonaws.com with your account's ECR domain (everything before /docker-aws-app in your repository URI). You should see Login Succeeded.

5.3 — Tag the image for ECR

docker tag docker-aws-app:latest $ECR_URI:latest

5.4 — Push the image

docker push $ECR_URI:latest

Docker uploads each layer to ECR. This takes a minute or two depending on your internet speed. When complete, your container image is stored in AWS and ready to be deployed.

Verify in the AWS Console: Go to ECR → Repositories → docker-aws-app. You should see your image listed with a recent push date.

✅ Checkpoint: Your image is visible in the ECR console with a recent push timestamp.

Step 6: Deploy with Amazon ECS and Fargate

What we're doing: Creating an ECS cluster and deploying your container using Fargate — AWS's serverless container compute engine. Fargate runs your containers without you managing any servers.

This step uses the AWS Console. The ECS UI has evolved significantly — we'll navigate it step by step.

6.1 — Create an ECS Cluster

  1. In the AWS Console, search for ECS and open it
  2. Click Clusters → Create cluster
  3. Cluster name: docker-aws-cluster
  4. Infrastructure: Select AWS Fargate (serverless)
  5. Click Create

Wait for the cluster status to show Active — usually under a minute.

6.2 — Create a Task Definition

A task definition tells ECS what container to run, how much CPU/memory to give it, and which ports to expose.

  1. In ECS, click Task definitions → Create new task definition
  2. Task definition family: docker-aws-task
  3. Infrastructure: AWS Fargate
  4. OS/Architecture: Linux/X86_64
  5. CPU: .25 vCPU | Memory: .5 GB (smallest option — sufficient for this app)
  6. In the Container section:
    • Container name: docker-aws-app
    • Image URI: paste your full ECR URI including :latest
    • Port mappings: Container port 3000, Protocol TCP
  7. Click Create
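The same task definition can also be registered from the terminal. A hedged sketch: it assumes AWS_ACCOUNT_ID holds your 12-digit account ID, ECR_URI is set as in Step 5, and an execution role named ecsTaskExecutionRole exists (the console creates one for you the first time). It's wrapped in a function so nothing runs until you call it:

```shell
# Sketch: CLI equivalent of the console task definition above.
# Assumes AWS_ACCOUNT_ID and ECR_URI are set in this shell, and that the
# default ecsTaskExecutionRole exists in your account.
register_task_definition() {
  aws ecs register-task-definition \
    --family docker-aws-task \
    --requires-compatibilities FARGATE \
    --network-mode awsvpc \
    --cpu 256 --memory 512 \
    --execution-role-arn "arn:aws:iam::${AWS_ACCOUNT_ID}:role/ecsTaskExecutionRole" \
    --container-definitions "[{
        \"name\": \"docker-aws-app\",
        \"image\": \"${ECR_URI}:latest\",
        \"portMappings\": [{\"containerPort\": 3000, \"protocol\": \"tcp\"}],
        \"essential\": true
      }]" \
    --region us-east-1
}
```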

6.3 — Create a Service

A service ensures your task keeps running — if the container fails, ECS automatically restarts it.

  1. Go to your cluster (docker-aws-cluster)
  2. Click the Services tab → Create
  3. Launch type: Fargate
  4. Task definition: select docker-aws-task with the latest revision
  5. Service name: docker-aws-service
  6. Desired tasks: 1
  7. In Networking:
    • Select your default VPC
    • Select at least one subnet
    • Create a new security group:
      • Name: docker-aws-sg
      • Inbound rule: Type Custom TCP, Port 3000, Source 0.0.0.0/0
    • Enable Public IP
  8. Click Create
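For reference, steps 6.1 and 6.3 map onto two CLI calls. A sketch only: subnet-xxxx and sg-xxxx are placeholders for your default-VPC subnet and the docker-aws-sg security group, and both commands are wrapped in functions so they don't run on paste:

```shell
# Sketch: CLI equivalents of the cluster and service console steps.
# subnet-xxxx and sg-xxxx are placeholders; look them up in the VPC console.
create_cluster() {
  aws ecs create-cluster --cluster-name docker-aws-cluster --region us-east-1
}

create_service() {
  aws ecs create-service \
    --cluster docker-aws-cluster \
    --service-name docker-aws-service \
    --task-definition docker-aws-task \
    --desired-count 1 \
    --launch-type FARGATE \
    --network-configuration "awsvpcConfiguration={subnets=[subnet-xxxx],securityGroups=[sg-xxxx],assignPublicIp=ENABLED}" \
    --region us-east-1
}
```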

⏳ Wait: ECS provisions the Fargate task — this takes 2–5 minutes. You'll see the service status change from PENDING to RUNNING.

6.4 — Find your public IP

  1. In your cluster, click the Tasks tab
  2. Click the running task
  3. In the Configuration section, find Public IP
  4. Open http://[your-public-ip]:3000 in your browser
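The public IP can also be fetched from the CLI. A sketch, wrapped in a function; it chains three lookups: the task ARN, the task's network interface, and that interface's public IP:

```shell
# Sketch: look up the running task's public IP without the console.
get_public_ip() {
  TASK_ARN=$(aws ecs list-tasks --cluster docker-aws-cluster \
    --query 'taskArns[0]' --output text)
  ENI_ID=$(aws ecs describe-tasks --cluster docker-aws-cluster \
    --tasks "$TASK_ARN" \
    --query "tasks[0].attachments[0].details[?name=='networkInterfaceId'].value" \
    --output text)
  aws ec2 describe-network-interfaces --network-interface-ids "$ENI_ID" \
    --query 'NetworkInterfaces[0].Association.PublicIp' --output text
}
```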

✅ Final Checkpoint: Your application loads in the browser from the AWS public IP. It's running inside a Docker container on AWS Fargate — no servers managed, no infrastructure to maintain.

Troubleshooting: If the page doesn't load, confirm the security group inbound rule allows TCP traffic on port 3000 from 0.0.0.0/0. Check the ECS task logs in the Logs tab of the task detail page for any application errors.

Step 7: Clean Up Resources

Important: To avoid unexpected charges, clean up the resources when you're done.

# Stop the ECS service
aws ecs update-service \
  --cluster docker-aws-cluster \
  --service docker-aws-service \
  --desired-count 0 \
  --region us-east-1

# Delete the service
aws ecs delete-service \
  --cluster docker-aws-cluster \
  --service docker-aws-service \
  --region us-east-1

# Delete the cluster
aws ecs delete-cluster \
  --cluster docker-aws-cluster \
  --region us-east-1

# Delete the ECR repository and images
aws ecr delete-repository \
  --repository-name docker-aws-app \
  --force \
  --region us-east-1

Also deactivate or delete the IAM user's access keys in the IAM console if you don't plan to continue using them.

What You Just Learned

You completed a real containerized cloud deployment. The workflow you followed — Dockerfile → build → tag → push to registry → deploy to managed container service — is the same workflow used by engineering teams deploying production applications every day.

Skills you practiced:

  • Writing a production-ready Dockerfile with layer caching optimization
  • Building and running Docker containers locally
  • Pushing container images to a private AWS registry
  • Deploying with ECS Fargate — serverless container compute
  • AWS IAM — creating scoped credentials for programmatic access
  • Networking — security groups, public IPs, port mapping

What you just did is not a basic exercise. The Dockerfile → ECR → ECS workflow is exactly what production engineering teams use. You now have that experience in your hands.

What to Build Next

Ready to go further?

Keep learning with our Tutorials main page and subscribe to the newsletter for updates on new projects.
