mfkimbell / django-serverless-webapp

Django web application that uses postgres to manage user data and uses amazon S3 to store static files. The application was containerized and the image was uploaded to ECR. This image was then used to deploy the web app on ECS Fargate serverless launch type with https and custom domain.


django-serverless-webapp

(architecture diagram: django-map)

Tools used:

  • AWS CLI - Programmatically upload files
  • Boto3 - Programmatically access the S3 bucket from the Django application
  • Certificate Manager - SSL certificate for serving the site over HTTPS
  • Django - Python framework for building the web application
  • Django-environ - Environment variables for secret values
  • Django-storages - Allows for uploading files to S3
  • Docker - Containerization of the application
  • ECR - Container registry for storing the application image
  • ECS - Container management
  • Fargate - Serverless launch type for running the Docker container
  • Gunicorn - WSGI HTTP server connecting the Django application to AWS
  • Postgres - Managing user login information
  • Psycopg2 - PostgreSQL adapter for managing the database from Python
  • RDS - Replacing the default SQLite database with a PostgreSQL database on AWS
  • Route53 - Custom domain names for the webapp

To Run Locally:

(For me personally, I had to run `Set-ExecutionPolicy -ExecutionPolicy Unrestricted` before the activation script would execute on Windows 11; alternatively, open PowerShell as administrator.)

cd django/simply/simply
virtualenv venv
venv\Scripts\activate
pip install -r requirements.txt
python manage.py runserver

Note that the virtual environment must be activated before installing requirements, and you can write `deactivate` to shut it down when you're done.

Webapp demo:

(screenshot: Django display)

AWS Implementation:

-created user group "developer" with AdminAccess, IAMChangePasswordAccess, AmazonEC2ContainerRegistryFullAccess, and AmazonECS_FullAccess as the permission policies

-created user "mitchell" and added it to that group

-created account alias "mitchell-django"

-new sign in link "https://mitchell-django.signin.aws.amazon.com/console"

-created an access key for the IAM user

-installed the AWS CLI

I create a bucket policy so that anyone can call GetObject on my S3 bucket:

{
  "Id": "Policy1704489125788",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1704489124269",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::django-static-151/*",
      "Principal": "*"
    }
  ]
}
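If the policy above is saved to a local file (say `policy.json` — the filename is my own choice, not from the project), it can be attached from the AWS CLI instead of the console:

```shell
# Attach the public-read policy to the static-files bucket
aws s3api put-bucket-policy \
  --bucket django-static-151 \
  --policy file://policy.json
```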

-in the settings.py of my Django application, I add the access keys and custom domain:

AWS_ACCESS_KEY_ID = "AKIAW3MEFFIRVJ2I7SLG"
AWS_SECRET_ACCESS_KEY = "****************************"
AWS_STORAGE_BUCKET_NAME = 'django-static-151'
STORAGES = {"staticfiles": {"BACKEND": "storages.backends.s3boto3.S3StaticStorage"}}
AWS_S3_CUSTOM_DOMAIN = '%s.s3.amazonaws.com' % AWS_STORAGE_BUCKET_NAME
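Hardcoding keys like this works, but the django-environ package in the tools list exists precisely to keep secrets out of source control. A minimal stdlib sketch of the same idea (the helper and variable defaults here are my own illustration, not the project's code):

```python
import os

# Pull AWS settings from the environment instead of hardcoding them in
# settings.py; a variable that is missing and has no default fails fast.
def env(name, default=None):
    value = os.environ.get(name, default)
    if value is None:
        raise RuntimeError(f"missing required environment variable: {name}")
    return value

# In real settings the credentials would have no default,
# e.g. AWS_ACCESS_KEY_ID = env("AWS_ACCESS_KEY_ID")
AWS_STORAGE_BUCKET_NAME = env("AWS_STORAGE_BUCKET_NAME", "django-static-151")
AWS_S3_CUSTOM_DOMAIN = '%s.s3.amazonaws.com' % AWS_STORAGE_BUCKET_NAME
```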

I manually added these files, but it can be done programmatically:
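Two ways to script the upload: with the S3 storage backend configured, django-storages should push the files during collectstatic, or an existing local directory can be copied up with the AWS CLI (the local `static/` path is an assumption about the project layout):

```shell
# Upload via the configured django-storages backend
python manage.py collectstatic

# Or copy a local directory into the bucket directly
aws s3 cp static/ s3://django-static-151/ --recursive
```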

(screenshot)

Here we can see my static files are coming from AWS:

(screenshot)

Django uses a SQLite database by default, so I change it to PostgreSQL.

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'demo_db',
        'USER': 'mfkimbell',
        'PASSWORD': 'password',
        'HOST': 'database-1.cpkm2gqmy8u4.us-east-2.rds.amazonaws.com',
        'PORT': '5432',
    }
}
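A quick sanity check before wiring the database into Django is to connect with the psql client directly (assuming psql is installed and the RDS security group allows inbound traffic from your IP):

```shell
# Prompts for the password, then opens a session against the RDS endpoint
psql -h database-1.cpkm2gqmy8u4.us-east-2.rds.amazonaws.com -p 5432 -U mfkimbell -d demo_db
```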

Creating an admin user, then logging into the admin console on the webapp. I personally never used these features, but they can be used to manage data in the Django application:

(screenshot)

-bought domain name mitchell-django.net (screenshot)

-registered the domain in Certificate Manager (screenshot)

-added CNAME records to Route53 (screenshot)

and we add the domains to our settings.py:

ALLOWED_HOSTS = ['www.mitchell-django.com','mitchell-django.com','*']
CSRF_TRUSTED_ORIGINS = ['https://www.mitchell-django.com', 'https://mitchell-django.com']

Now we create the Dockerfile:

-I add PYTHONUNBUFFERED=1 since it sends Python output straight to our container logs

-I also specify a port number so the application can be accessed from outside the container

FROM --platform=linux/amd64 python:3.11-bullseye
ENV PYTHONUNBUFFERED=1
WORKDIR /simply
COPY requirements.txt .
RUN pip3 install -r requirements.txt
COPY . .
CMD python manage.py runserver 0.0.0.0:8000
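Note that `runserver` is Django's development server; since Gunicorn is in the tools list, a production image would typically swap the CMD for something like the following (the `simply.wsgi` module path is my assumption about the project layout):

```dockerfile
# Serve the app through the Gunicorn WSGI server instead of runserver
CMD gunicorn simply.wsgi:application --bind 0.0.0.0:8000
```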

I then run

docker build -t simply2 .
docker run -p 8888:8000 simply2

Here we can see the application successfully running in the container: (screenshot)

Next I upload my container image to ECR with the following commands:

aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 471112952355.dkr.ecr.us-east-1.amazonaws.com
docker tag simply2:latest 471112952355.dkr.ecr.us-east-1.amazonaws.com/my-first-repository:latest
docker push 471112952355.dkr.ecr.us-east-1.amazonaws.com/my-first-repository:latest

(The login registry must be in the same region as the repository you tag and push to.)

Fargate tasks get dynamic IPs, which is why '*' appears in ALLOWED_HOSTS above.

-ECS tasks come from "task definitions", which are JSON templates that tell ECS what to run and how
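A minimal Fargate task definition for this setup might look like the following; the family name and CPU/memory values are illustrative, based on the names used in this README, not the project's actual definition:

```json
{
  "family": "DemoAppTask",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "DemoAppContainer",
      "image": "471112952355.dkr.ecr.us-east-1.amazonaws.com/my-first-repository:latest",
      "portMappings": [{ "containerPort": 8000, "protocol": "tcp" }],
      "essential": true
    }
  ]
}
```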

I use an Application Load Balancer because this application uses HTTPS.

I make a Security Group DemoAppLB-SG, which I attach to a Load Balancer DemoAppLB. I add Target Group DemoAppTG to the Load Balancer's listener and set the target type to IP since I plan to use Fargate. I point the Target Group at HTTP port 8000 so our Load Balancer can communicate with the Docker container (which, if you remember, runs on port 8000).

If you want session cookies to keep a user pinned to the same target, you just need to enable "sticky sessions" on the Target Group.

Load Balancer --> Listener --> Target Group --> Application

(screenshot)

Currently, I have it set up so that anyone trying to connect via HTTP will be automatically redirected to HTTPS:

(screenshot)

We can see the target group on port 8000:

(screenshot)

And we can see all of the Route53 routing:

(screenshot)

The Type A records point to the Load Balancer, and our CNAME records are used for SSL certificate validation.

Now I need to create a Task Definition. I create a container named DemoAppContainer and point it at the image URI that I uploaded to ECR.

(screenshot)

I also create an ECS cluster DemoAppCluster. Each instance in a cluster is called a node. Clusters can contain a mix of tasks that are hosted on AWS Fargate, Amazon EC2 instances, or external instances. You can monitor the creation of the cluster in CloudFormation.

A Service is used to guarantee that some number of Tasks is always running. If a Task's container exits due to an error, or the underlying EC2 instance fails and is replaced, the ECS Service will replace the failed Task. This is why we create Clusters: so the Service has plenty of CPU, memory, and network ports to draw on. To us it doesn't really matter which instance Tasks run on so long as they run. A Service configuration references a Task Definition, and the Service is responsible for creating Tasks. I create a service called DemoAppService which will use the Load Balancer and Target Group previously created.
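The console steps above roughly correspond to a single CLI call; the subnet, security group, and target group values below are placeholders, not values from this project:

```shell
# Create the Fargate service behind the existing Target Group
aws ecs create-service \
  --cluster DemoAppCluster \
  --service-name DemoAppService \
  --task-definition DemoAppTask \
  --desired-count 1 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-PLACEHOLDER],securityGroups=[sg-PLACEHOLDER],assignPublicIp=ENABLED}" \
  --load-balancers "targetGroupArn=PLACEHOLDER,containerName=DemoAppContainer,containerPort=8000"
```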

(screenshot)

We can see the launch type is specifically Fargate: (screenshot)

And here we can see my application running on www.mitchell-django.net:

(screenshot)
