
Serverless Image Processing Pipeline with AWS ECS and Lambda — SitePoint

Welcome, devs, to the world of development and automation. Today, we are diving into an exciting project in which we will create a serverless image processing pipeline with AWS services.

The project starts with creating S3 buckets for storing uploaded images and processed thumbnails, then brings in several more services: Lambda, API Gateway (to trigger the Lambda function), and DynamoDB (to store image metadata). Finally, we will run the frontend in an ECS cluster by building a Docker image of the project.

This project is packed with cloud services and development tech stacks like Next.js, and practicing it will further enhance your understanding of cloud services and how they interact with each other. So without further ado, let's get started!

Note: The code and instructions in this post are for demo use and learning only. A production environment will require a tighter grip on configurations and security.

Prerequisites

Before we get into the project, we need to ensure that we have the following requirements met in our system:

  • An AWS account: Since we use AWS services for the project, we need an AWS account. A configured IAM user with access to the required services is recommended.
  • Basic understanding of AWS services: Since we are dealing with many AWS services, it is better to have a decent understanding of them, such as S3 for storage, API Gateway for triggering the Lambda function, and so on.
  • Node Installed: Our frontend is built with Next.js, so having Node in your system is necessary.

For Code reference, here is the GitHub repo.

AWS Services Setup

We will start the project by setting up our AWS services. First and foremost, we will create two S3 buckets, namely sample-image-uploads-bucket and sample-thumbnails-bucket. The reason for these long names is that a bucket name has to be globally unique across all of AWS.

To create the bucket, head over to the S3 dashboard, click ‘Create bucket’, select ‘General purpose’, give it a name (sample-image-uploads-bucket), and leave the rest of the configuration at its defaults.
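If you prefer scripting to the console, the same bucket creation can be sketched with boto3 (assuming configured AWS credentials; the `is_valid_bucket_name` helper is my own and checks only the core naming rules, not every edge case):

```python
import re


def is_valid_bucket_name(name: str) -> bool:
    """Rough check of the core S3 naming rules: 3-63 characters,
    lowercase letters, digits, dots, and hyphens, starting and
    ending with a letter or digit."""
    return bool(re.fullmatch(r"[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]", name))


def create_bucket(s3_client, name: str, region: str = "us-east-1") -> None:
    """Create a bucket; s3_client is a boto3 S3 client, e.g. boto3.client('s3')."""
    if not is_valid_bucket_name(name):
        raise ValueError(f"invalid bucket name: {name}")
    if region == "us-east-1":
        # us-east-1 rejects an explicit LocationConstraint
        s3_client.create_bucket(Bucket=name)
    else:
        s3_client.create_bucket(
            Bucket=name,
            CreateBucketConfiguration={"LocationConstraint": region},
        )
```

Because bucket names are global, a name collision fails with `BucketAlreadyExists`, which is exactly why the tutorial uses the long `sample-…` names.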

Similarly, create the other bucket, named sample-thumbnails-bucket, but for this bucket make sure you uncheck Block Public Access, because our ECS frontend will need to read from it.

We need to ensure that the sample-thumbnails-bucket allows public read access so that the ECS frontend can display the thumbnails. For that, attach the following policy to the bucket:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::sample-thumbnails-bucket/*"
    }
  ]
}
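Applying that policy programmatically might look like the following sketch (the helper names are mine; `s3_client` is a boto3 S3 client):

```python
import json


def public_read_policy(bucket_name: str) -> str:
    """Return the public-read bucket policy above as a JSON string."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "PublicRead",
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:GetObject",
                "Resource": f"arn:aws:s3:::{bucket_name}/*",
            }
        ],
    }
    return json.dumps(policy)


def attach_public_read(s3_client, bucket_name: str) -> None:
    # Fails with AccessDenied unless Block Public Access is off for the bucket
    s3_client.put_bucket_policy(
        Bucket=bucket_name,
        Policy=public_read_policy(bucket_name),
    )
```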

After creating the buckets, let’s move on to our database for storing image metadata. We will create a DynamoDB table for that. Go to your DynamoDB console, click Create table, give it a name (image_metadata), and for the partition key, select String and name it image_id.
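The equivalent table definition as a boto3 sketch (the on-demand `BillingMode` here is my assumption; the console's default capacity settings work just as well):

```python
def image_metadata_table_spec(table_name: str = "image_metadata") -> dict:
    """Table definition matching the console steps: a single
    String partition key named image_id."""
    return {
        "TableName": table_name,
        "AttributeDefinitions": [
            {"AttributeName": "image_id", "AttributeType": "S"}
        ],
        "KeySchema": [
            {"AttributeName": "image_id", "KeyType": "HASH"}
        ],
        "BillingMode": "PAY_PER_REQUEST",  # on-demand; no capacity planning
    }


def create_metadata_table(dynamodb_client, table_name: str = "image_metadata"):
    # dynamodb_client is a boto3 DynamoDB client, e.g. boto3.client('dynamodb')
    return dynamodb_client.create_table(**image_metadata_table_spec(table_name))
```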

AWS services will communicate with each other, so they need a role with the proper permissions. To create a role, go to the IAM dashboard, select Roles, and click Create role. Under Trusted entity type, select AWS service, and under Use case, choose Lambda. Attach the following policies:

  • AmazonS3FullAccess
  • AmazonDynamoDBFullAccess
  • CloudWatchLogsFullAccess

Give this role a name (Lambda-Image-Processor-Role) and save it.
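For reference, the same role could be created with boto3 along these lines (a sketch; the trust policy is the standard one that lets the Lambda service assume the role):

```python
import json

# Standard trust policy allowing Lambda to assume the role
LAMBDA_TRUST_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "lambda.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

# The three managed policies chosen in the console steps above
MANAGED_POLICIES = [
    "arn:aws:iam::aws:policy/AmazonS3FullAccess",
    "arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess",
    "arn:aws:iam::aws:policy/CloudWatchLogsFullAccess",
]


def create_lambda_role(iam_client, role_name: str = "Lambda-Image-Processor-Role"):
    """iam_client is a boto3 IAM client, e.g. boto3.client('iam')."""
    role = iam_client.create_role(
        RoleName=role_name,
        AssumeRolePolicyDocument=json.dumps(LAMBDA_TRUST_POLICY),
    )
    for arn in MANAGED_POLICIES:
        iam_client.attach_role_policy(RoleName=role_name, PolicyArn=arn)
    return role
```

FullAccess policies keep the demo simple; as the conclusion notes, production roles should be scoped down to the specific buckets and table.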

Creating Lambda Function

We have our Lambda role, buckets, and DynamoDB table ready, so now let’s create the Lambda function that will process each image and generate a thumbnail from it. We process images with the Pillow library, which Lambda doesn’t provide by default; we will fix that shortly by adding a layer to the function.

Go to your Lambda dashboard and click Create function. Select Author from scratch, choose Python 3.9 as the runtime, and give it a name: image-processor. In the Code tab, select Upload from → .zip file, and upload the zip file of your image-processor code.

Go to Configuration, and under the Permissions section, edit the configuration by changing the existing role to the role we created (Lambda-Image-Processor-Role).

Now go to your S3 bucket (sample-image-uploads-bucket), open its Properties section, and scroll down to Event notifications. Click Create event notification, give it a name (trigger-image-processor), select PUT as the event type, and choose the Lambda function we created (image-processor).
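The same event notification can be expressed in code; a sketch (helper names are mine; note that outside the console you must also grant S3 permission to invoke the function, which the console does automatically):

```python
def upload_trigger_config(lambda_arn: str) -> dict:
    """Notification configuration equivalent to the console steps:
    fire the Lambda on every s3:ObjectCreated:Put event."""
    return {
        "LambdaFunctionConfigurations": [
            {
                "Id": "trigger-image-processor",
                "LambdaFunctionArn": lambda_arn,
                "Events": ["s3:ObjectCreated:Put"],
            }
        ]
    }


def attach_upload_trigger(s3_client, bucket: str, lambda_arn: str) -> None:
    """s3_client is a boto3 S3 client. This replaces the bucket's whole
    notification configuration, so merge any existing entries first."""
    s3_client.put_bucket_notification_configuration(
        Bucket=bucket,
        NotificationConfiguration=upload_trigger_config(lambda_arn),
    )
```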

Now, since Pillow isn’t bundled with the Lambda runtime, take the following steps to add it:

  1. Go to your Lambda function (image-processor) and scroll down to the Layers section; click Add a layer.
  2. In the Add layer screen, select Specify an ARN and provide this ARN: arn:aws:lambda:us-east-1:770693421928:layer:Klayers-p39-pillow:1. Change the region accordingly; I am using us-east-1. Add the layer.
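With Pillow installed locally (pip install Pillow), you can sanity-check the thumbnail logic the handler relies on before deploying; the `make_thumbnail` helper below is mine, not part of the Lambda code:

```python
from io import BytesIO

from PIL import Image  # the same library the Klayers layer provides


def make_thumbnail(data: bytes, size=(200, 200)) -> bytes:
    """Shrink an image to fit inside `size` and return JPEG bytes,
    mirroring the thumbnail step in the Lambda handler."""
    image = Image.open(BytesIO(data))
    image.thumbnail(size)  # resizes in place, preserving aspect ratio
    buf = BytesIO()
    image.convert("RGB").save(buf, "JPEG")
    return buf.getvalue()
```

Note that `Image.thumbnail` never enlarges and keeps the aspect ratio, so an 800×600 source becomes a 200×150 thumbnail, not 200×200.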

Now, in the Code tab of your Lambda function you will have a lambda_function.py. Put the following content inside it:

import boto3
import uuid
from PIL import Image
from io import BytesIO
import datetime

s3 = boto3.client('s3')
dynamodb = boto3.client('dynamodb')

UPLOAD_BUCKET = 'sample-image-uploads-bucket'
THUMBNAIL_BUCKET = 'sample-thumbnails-bucket'
DDB_TABLE = 'image_metadata'

def lambda_handler(event, context):
    # Get the source bucket and object key from the S3 PUT event
    record = event['Records'][0]
    bucket = record['s3']['bucket']['name']
    key = record['s3']['object']['key']

    # Download the uploaded image from S3
    response = s3.get_object(Bucket=bucket, Key=key)
    image = Image.open(BytesIO(response['Body'].read()))

    # Shrink in place to fit within 200x200, preserving the aspect ratio
    image.thumbnail((200, 200))

    # Write the thumbnail into an in-memory buffer as JPEG
    thumbnail_buffer = BytesIO()
    image.save(thumbnail_buffer, 'JPEG')
    thumbnail_buffer.seek(0)

    # Upload the thumbnail to the thumbnails bucket
    thumbnail_key = f"thumb_{key}"
    s3.put_object(
        Bucket=THUMBNAIL_BUCKET,
        Key=thumbnail_key,
        Body=thumbnail_buffer,
        ContentType='image/jpeg'
    )

    # Record the image metadata in DynamoDB
    image_id = str(uuid.uuid4())
    original_url = f"https://{UPLOAD_BUCKET}.s3.amazonaws.com/{key}"
    thumbnail_url = f"https://{THUMBNAIL_BUCKET}.s3.amazonaws.com/{thumbnail_key}"
    uploaded_at = datetime.datetime.now().isoformat()

    dynamodb.put_item(
        TableName=DDB_TABLE,
        Item={
            'image_id': {'S': image_id},
            'original_url': {'S': original_url},
            'thumbnail_url': {'S': thumbnail_url},
            'uploaded_at': {'S': uploaded_at}
        }
    )

    return {
        'statusCode': 200,
        'body': f"Thumbnail created: {thumbnail_url}"
    }
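You can sanity-check the handler's event parsing locally with a trimmed-down S3 PUT event; the `parse_s3_event` helper below is mine and simply mirrors the first lines of the handler:

```python
def parse_s3_event(event: dict) -> tuple:
    """Extract (bucket, key) the same way the handler above does."""
    record = event['Records'][0]
    return record['s3']['bucket']['name'], record['s3']['object']['key']


# A trimmed-down S3 PUT event, containing only the fields the handler reads
SAMPLE_EVENT = {
    "Records": [
        {
            "s3": {
                "bucket": {"name": "sample-image-uploads-bucket"},
                "object": {"key": "ghibil-art.jpg"},
            }
        }
    ]
}
```

Real S3 events carry many more fields (event time, requester, object size), but the handler only touches the two extracted here.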

Now, we will need another Lambda function for API Gateway because that will act as the entry point for our frontend ECS app to fetch image data from DynamoDB.

To create this Lambda function, go to your Lambda dashboard, click Create function, select Author from scratch and Python 3.9 as the runtime, and give it a name: get-image-metadata. In the configuration, select the same role that we assigned to the first Lambda function (Lambda-Image-Processor-Role).

Now, in the Code section of the function, put the following content:

import boto3
import json

dynamodb = boto3.client('dynamodb')
TABLE_NAME = 'image_metadata'

def lambda_handler(event, context):
    try:
        # Fetch every item from the metadata table
        # (note: a single scan call returns at most 1 MB of data)
        response = dynamodb.scan(TableName=TABLE_NAME)

        # Convert DynamoDB's typed attribute format into plain JSON
        images = []
        for item in response['Items']:
            images.append({
                'image_id': item['image_id']['S'],
                'original_url': item['original_url']['S'],
                'thumbnail_url': item['thumbnail_url']['S'],
                'uploaded_at': item['uploaded_at']['S']
            })

        return {
            'statusCode': 200,
            'headers': {
                "Content-Type": "application/json"
            },
            'body': json.dumps(images)
        }

    except Exception as e:
        return {
            'statusCode': 500,
            'body': f"Error: {str(e)}"
        }
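A single scan call returns at most 1 MB of data, so tables that outgrow the demo need to follow LastEvaluatedKey; a sketch of a paginated variant (`dynamodb_client` is a boto3 DynamoDB client, the helper name is mine):

```python
def scan_all_items(dynamodb_client, table_name: str) -> list:
    """Scan a DynamoDB table page by page, following LastEvaluatedKey
    until the table is exhausted, and return all items."""
    items = []
    kwargs = {"TableName": table_name}
    while True:
        page = dynamodb_client.scan(**kwargs)
        items.extend(page.get("Items", []))
        if "LastEvaluatedKey" not in page:
            return items
        # Resume the next scan where this page left off
        kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]
```

As the conclusion notes, a query against a proper key design scales better than scan; this loop just keeps the demo correct past the 1 MB limit.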

Creating the API Gateway

The API Gateway will act as the entry point for your ECS Frontend application to fetch image data from DynamoDB. It will connect to the Lambda function that queries DynamoDB and returns the image metadata. The URL of the Gateway is used in our Frontend app to display images. To create the API Gateway, do the following steps:

  • Go to the AWS Management Console → Search for API Gateway → Click Create API.
  • Select HTTP API.
  • Click on Build.
  • API name: image-gallery-api
  • Add integrations: Select Lambda and select the get-image-metadata function
  • Select Method: GET and Path: /images
  • Endpoint type: Regional
  • Click on Next and create the API Gateway URL.
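Once the API is deployed, the /images route can be exercised from any HTTP client; a minimal Python sketch (the invoke URL below is a placeholder, replace it with your own):

```python
import json
from urllib.request import urlopen

# Placeholder: substitute your API Gateway invoke URL
API_GATEWAY_URL = "https://abc123.execute-api.us-east-1.amazonaws.com"


def parse_images(body: str) -> list:
    """The /images route returns a JSON array of metadata objects;
    pull out the thumbnail URLs the frontend will render."""
    images = json.loads(body)
    return [img["thumbnail_url"] for img in images]


def fetch_thumbnail_urls() -> list:
    with urlopen(f"{API_GATEWAY_URL}/images") as resp:
        return parse_images(resp.read().decode())
```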

Before creating the frontend, let’s test the application manually. First, go to your upload S3 bucket (sample-image-uploads-bucket) and upload a jpg/jpeg image; other image formats will not work, as our function only handles these two types:


In the picture above, I have uploaded an image titled “ghibil-art.jpg”. Once uploaded, it triggers the Lambda function, which creates a thumbnail named “thumb_ghibil-art.jpg”, stores it in sample-thumbnails-bucket, and writes the image’s metadata to the image_metadata table in DynamoDB.

In the image above, you can see the item inside the Explore items section of our DynamoDB table image_metadata. To test the API Gateway, we will hit the Invoke URL of our image-gallery-api followed by /images. It shows the following output with the curl command:

Now that our application is working fine, we can deploy a frontend to visualise the project.

Creating the Frontend App

For the sake of simplicity, we will create a minimal gallery frontend using Next.js, Dockerize it, and deploy it on ECS. To create the app, follow these steps:

Initialization

npx create-next-app@latest image-gallery
cd image-gallery
npm install
npm install axios

Create a new file components/Gallery.js:

'use client';

import { useState, useEffect } from 'react';
import axios from 'axios';
import styles from './Gallery.module.css';

const Gallery = () => {
  const [images, setImages] = useState([]);
  const [loading, setLoading] = useState(true);

  useEffect(() => {
    const fetchImages = async () => {
      try {
        // Replace <API_GATEWAY_URL> with your API Gateway invoke URL
        const response = await axios.get('https://<API_GATEWAY_URL>/images');
        setImages(response.data);
        setLoading(false);
      } catch (error) {
        console.error('Error fetching images:', error);
        setLoading(false);
      }
    };

    fetchImages();
  }, []);

  if (loading) {
    return <div className={styles.loading}>Loading...</div>;
  }

  return (
    <div className={styles.gallery}>
      {images.map((image) => (
        <div key={image.image_id} className={styles.imageCard}>
          <img
            src={image.thumbnail_url}
            alt="Gallery thumbnail"
            width={200}
            height={150}
            className={styles.thumbnail}
          />
          <p className={styles.date}>
            {new Date(image.uploaded_at).toLocaleDateString()}
          </p>
        </div>
      ))}
    </div>
  );
};

export default Gallery;

Make sure to change the URL in the axios.get call to your own API Gateway invoke URL.

Add CSS Module

Create components/Gallery.module.css:

.gallery {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(200px, 1fr));
  gap: 20px;
  padding: 20px;
  max-width: 1200px;
  margin: 0 auto;
}
.imageCard {
  background: #fff;
  border-radius: 8px;
  box-shadow: 0 2px 5px rgba(0,0,0,0.1);
  overflow: hidden;
  transition: transform 0.2s;
}
.imageCard:hover {
  transform: scale(1.05);
}
.thumbnail {
  width: 100%;
  height: 150px;
  object-fit: cover;
}
.date {
  text-align: center;
  padding: 10px;
  margin: 0;
  font-size: 0.9em;
  color: #666;
}
.loading {
  text-align: center;
  padding: 50px;
  font-size: 1.2em;
}

Update the Home Page

Modify app/page.js:

import Gallery from '../components/Gallery';

export default function Home() {
  return (
    <main>
      <h1 style={{ textAlign: 'center', padding: '20px' }}>Image Gallery</h1>
      <Gallery />
    </main>
  );
}

Next.js’s built-in Image component

To use Next.js’s built-in Image component for better optimization (you would also swap the plain img tag in Gallery.js for Image from next/image), update next.config.mjs:


const nextConfig = {
  images: {
    domains: ['sample-thumbnails-bucket.s3.amazonaws.com'],
  },
};

export default nextConfig;

Run the Application

Run the development server with npm run dev, then visit http://localhost:3000 in your browser, and you will see the application running with all the uploaded thumbnails.

For demonstration purposes, I have put four images (jpeg/jpg) in my sample-image-uploads-bucket. The function transforms them into thumbnails and stores them in the sample-thumbnails-bucket.

The application looks like this:

Containerising and Creating the ECS Cluster

Now we are almost done with the project, so we will continue by creating a Dockerfile of the project as follows:

# Use the official Node.js image as a base
FROM node:18-alpine AS builder

# Set working directory
WORKDIR /app

# Copy package files and install dependencies
COPY package.json package-lock.json ./
RUN npm install

# Copy the rest of the application code
COPY . .

# Build the Next.js app
RUN npm run build

# Use a lightweight Node.js image for production
FROM node:18-alpine

# Set working directory
WORKDIR /app

# Copy built files from the builder stage
COPY --from=builder /app ./

# Expose port
EXPOSE 3000

# Run the application
CMD ["npm", "start"]

Now we will build the Docker image using:

docker build -t sample-nextjs-app .

Now that we have our Docker image, we will push it to an AWS ECR repository. For that, follow these steps:

Step 1: Push the Docker Image to Amazon ECR

  1. Go to the AWS Management Console → Search for ECR (Elastic Container Registry) → Open ECR.
  2. Create a new repository:
    • Click Create repository.
    • Set Repository name (e.g., sample-nextjs-app).
    • Choose Private (or Public if required).
    • Click Create repository.
  3. Push your Docker image to ECR:
    • In the newly created repository, click View push commands.
    • Follow the commands to:
      • Authenticate Docker with ECR.
      • Build, tag, and push your image.
    • You need to have the AWS CLI configured for this step.

Step 2: Create an ECS Cluster

aws ecs create-cluster --cluster-name sample-ecs-cluster

Step 3: Create a Task Definition

  1. In the ECS Console, go to Task Definitions.
  2. Click Create new Task Definition.
  3. Choose Fargate → Click Next step.
  4. Set task definition details:
    • Name: sample-nextjs-task
    • Task role: ecsTaskExecutionRole (create one if missing; a permissions policy like the following grants it access to pull the image from ECR):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Statement1",
      "Effect": "Allow",
      "Action": [
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability"
      ],
      "Resource": "arn:aws:ecr:us-east-1:624448302051:repository/sample-nextjs-app"
    }
  ]
}
  • Task memory & CPU: Choose appropriate values (e.g., 512 MB & 256 CPU).
  5. Define the container:
    • Click Add container.
    • Container name: sample-nextjs-container.
    • Image URL: Paste the ECR image URI from Step 1.
    • Port mappings: Set 3000 for both container and host ports.
    • Click Add.
  6. Click Create.
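For reference, the console choices above can also be expressed as arguments to boto3's ecs.register_task_definition; a sketch (in a real call, executionRoleArn would be the role's full ARN, not just its name):

```python
def nextjs_task_definition(image_uri: str) -> dict:
    """Keyword arguments for ecs.register_task_definition mirroring
    the console setup: Fargate, 256 CPU / 512 MB, port 3000."""
    return {
        "family": "sample-nextjs-task",
        "requiresCompatibilities": ["FARGATE"],
        "networkMode": "awsvpc",  # required for Fargate tasks
        "cpu": "256",
        "memory": "512",
        "executionRoleArn": "ecsTaskExecutionRole",  # use the full role ARN
        "containerDefinitions": [
            {
                "name": "sample-nextjs-container",
                "image": image_uri,
                "portMappings": [{"containerPort": 3000, "protocol": "tcp"}],
                "essential": True,
            }
        ],
    }


def register_task(ecs_client, image_uri: str):
    # ecs_client is a boto3 ECS client, e.g. boto3.client('ecs')
    return ecs_client.register_task_definition(**nextjs_task_definition(image_uri))
```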

Step 4: Create an ECS Service

  1. Go to “ECS” → Click Clusters → Select your cluster (sample-ecs-cluster).
  2. Click Create Service.
  3. Choose Fargate → Click Next step.
  4. Set up the service:
    • Task definition: Select sample-nextjs-task.
    • Cluster: sample-ecs-cluster.
    • Service name: sample-nextjs-service.
    • Number of tasks: 1 (Can scale later).
  5. Networking settings:
    • Select an existing VPC.
    • Choose Public subnets.
    • Enable Auto-assign Public IP.
  6. Click Next step → Create service.

Step 5: Access the Application

  1. Go to ECS > Clusters > sample-ecs-cluster.
  2. Click on the Tasks tab.
  3. Click on the running task.
  4. Find the Public IP under Network.

Open a browser and go to:

http://<PUBLIC_IP>:3000

Your Next.js app should be live! 🚀

Conclusion

This marks the end of the blog. Today, we dove into many AWS services: S3, IAM, ECR, Lambda, ECS, Fargate, and API Gateway. We started the project by creating S3 buckets and eventually deployed our application to an ECS cluster.

Throughout this guide, we covered containerizing the Next.js app, pushing it to ECR, configuring ECS task definitions, and deploying via the AWS console. This setup allows for automated scaling, easy updates, and secure API access—all key benefits of a cloud-native deployment. 

Potential production configurations may include changes like below:

  • Implementing more restrictive IAM permissions, improving control over public access to S3 buckets (using CloudFront, pre-signed URLs, or a backend proxy instead of making the sample-thumbnails-bucket public)
  • Adding error handling and pagination (especially for DynamoDB queries)
  • Utilizing secure VPC/network configurations for ECS (like using an Application Load Balancer and private subnets instead of direct public IPs)
  • Addressing scaling concerns by replacing the DynamoDB.scan operation within the metadata-fetching Lambda with the DynamoDB.query
  • Using environment variables instead of a hardcoded API gateway URL in the Next.js code
