A full production backend API built with the following tech stack:
- REST API: Django and Django REST Framework.
- Database: PostgreSQL.
- Unit Testing: Pytest.
- Package Management: Poetry.
- Containerization: Docker and Docker Compose.
- Cloud Provider: GCP (Google Cloud):
  - Google Cloud Compute Engine.
  - Google Cloud Storage.
  - Google Cloud SQL.
  - Google Cloud Container Registry.
- Infrastructure as Code: Terraform.
- CI/CD: Jenkins.
- Version Control: Git and GitHub.
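
For orientation, the paths referenced throughout this README follow roughly this layout (reconstructed from the commands below, so treat it as approximate):

```
backend/
├── Dockerfile
├── .docker-compose/              # base.yml, production.yml
├── .env.sample/                  # sample env files, copied to .env/
└── .env/                         # .env.base, .env.production
infrastructure/
├── .docker-compose.yml           # runs Terraform in a container
├── .backend.hcl.sample           # template for .backend.hcl
├── .secrets.auto.tfvars.sample   # template for .secrets.auto.tfvars
└── .ssh/                         # id_rsa, id_rsa.pub
scripts/
├── get_infra_output.py
└── run_backend.py
```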
Set the environment variables:
- Copy the `backend/.env.sample/` folder and rename it to `backend/.env/`.
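
Equivalently, from the repository root:

```sh
cp -r backend/.env.sample backend/.env
```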
Run the base environment locally:
- Update the `backend/.env/.env.base` file (a hypothetical example follows this list).
- Run Docker Compose:
  ```sh
  docker compose -f backend/.docker-compose/base.yml up -d --build
  ```
- Run Pytest:
  ```sh
  docker exec -it soundwav_base_django /bin/bash -c "/opt/venv/bin/pytest"
  ```
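
The exact variables depend on the project's Django settings; a minimal hypothetical `.env.base` (the names below are assumptions, not the project's actual keys):

```sh
# backend/.env/.env.base — illustrative values only
DEBUG=1
SECRET_KEY=local-dev-secret
POSTGRES_DB=soundwav
POSTGRES_USER=soundwav
POSTGRES_PASSWORD=local-dev-password
POSTGRES_HOST=db
POSTGRES_PORT=5432
```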
Run the production environment locally:
- Get the environment variables from the infrastructure (see the note after this list):
  ```sh
  python scripts/get_infra_output.py --compose=infrastructure/.docker-compose.yml --module=gcp
  ```
- Update the `backend/.env/.env.production` file.
- Run Docker Compose:
  ```sh
  docker compose -f backend/.docker-compose/production.yml up -d --build
  ```
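
`get_infra_output.py` presumably reads the Terraform outputs (compare `terraform output gcp` below). If it prints `KEY=VALUE` pairs — an assumption, check the script's actual output format — the values can be appended directly:

```sh
python scripts/get_infra_output.py --compose=infrastructure/.docker-compose.yml --module=gcp >> backend/.env/.env.production
```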
Setup Terraform Backend:
- Create a new project on Google Cloud Platform.
- Create a service account, download its service key JSON file, and rename the file to `.gcp_creds.json`.
- Create a storage bucket on Google Cloud Storage.
- Create a file named `.backend.hcl` under the `infrastructure` folder.
- Copy the content of `.backend.hcl.sample` into it and fill in the values (a sketch follows this list).
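
Terraform's GCS backend needs at least a bucket; a minimal sketch with placeholder values (the authoritative template is `.backend.hcl.sample`):

```sh
# Placeholder values — fill in your own bucket; the credentials path assumes
# .gcp_creds.json sits where Terraform runs from.
cat > infrastructure/.backend.hcl <<'EOF'
bucket      = "your-terraform-state-bucket"
prefix      = "terraform/state"
credentials = ".gcp_creds.json"
EOF
```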
Setup Secrets:
- Create a file named `.secrets.auto.tfvars` under the `infrastructure` folder.
- Copy the contents of `.secrets.auto.tfvars.sample` into it and fill in the values (a sketch follows this list).
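
The variable names are defined by this repo's Terraform code, so take them from the sample file; purely for illustration, a `.tfvars` file has this shape (the names below are hypothetical):

```sh
# Hypothetical names — use the ones from .secrets.auto.tfvars.sample.
cat > infrastructure/.secrets.auto.tfvars <<'EOF'
db_user     = "soundwav"
db_password = "change-me"
EOF
```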
Setup SSH:
- Generate an SSH key (a command sketch follows this list).
- Create a folder named `.ssh` under the `infrastructure` folder.
- Copy the `id_rsa.pub` and `id_rsa` files to `infrastructure/.ssh`.
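
Generating the key pair directly into the expected location covers both of the last two steps:

```sh
mkdir -p infrastructure/.ssh
# Writes id_rsa and id_rsa.pub straight into infrastructure/.ssh (the comment is arbitrary).
ssh-keygen -t rsa -b 4096 -f infrastructure/.ssh/id_rsa -C "soundwav-deploy"
```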
Run Terraform Commands:
- `terraform init`:
  ```sh
  docker compose -f infrastructure/.docker-compose.yml run --rm terraform init -backend-config=.backend.hcl
  ```
- `terraform plan` (all modules):
  ```sh
  docker compose -f infrastructure/.docker-compose.yml run --rm terraform plan
  ```
- `terraform plan` (GCP module only):
  ```sh
  docker compose -f infrastructure/.docker-compose.yml run --rm terraform plan -target="module.gcp"
  ```
- `terraform apply` (all modules):
  ```sh
  docker compose -f infrastructure/.docker-compose.yml run --rm terraform apply --auto-approve
  ```
- `terraform apply` (GCP module only):
  ```sh
  docker compose -f infrastructure/.docker-compose.yml run --rm terraform apply -target="module.gcp" --auto-approve
  ```
- `terraform destroy` (all modules):
  ```sh
  docker compose -f infrastructure/.docker-compose.yml run --rm terraform destroy --auto-approve
  ```
- `terraform destroy` (GCP module only):
  ```sh
  docker compose -f infrastructure/.docker-compose.yml run --rm terraform destroy -target="module.gcp" --auto-approve
  ```
- `terraform output` (GCP outputs):
  ```sh
  docker compose -f infrastructure/.docker-compose.yml run --rm terraform output gcp
  ```
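
Every command above shares the same prefix, so a shell alias can shorten day-to-day use (an optional convenience, not part of the repo):

```sh
alias tf='docker compose -f infrastructure/.docker-compose.yml run --rm terraform'
tf plan -target="module.gcp"
tf apply -target="module.gcp" --auto-approve
```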
Deploy the backend:
- Create the GCP resources by following the infrastructure section.
- Export the following values and change them according to your infrastructure:
  ```sh
  export KEY_FILE=.gcp_creds.json
  export KEY_TYPE=_json_key
  export HOSTNAME=gcr.io
  export PROJECT_ID=soundwav
  export IMG_NAME=soundwav
  export IMG_TAG=latest
  export FINAL_IMAGE=$HOSTNAME/$PROJECT_ID/$IMG_NAME:$IMG_TAG
  export ENVIRONMENT=production
  export INSTANCE_USER=<YOUR_INSTANCE_USER>
  export INSTANCE_IP=<YOUR_INSTANCE_IP>
  ```
- Log in to GCP Container Registry:
  ```sh
  cat $KEY_FILE | docker login -u $KEY_TYPE --password-stdin https://$HOSTNAME
  ```
- Build the Docker image:
  ```sh
  docker build -t $FINAL_IMAGE -f backend/Dockerfile backend --build-arg ENVIRONMENT=$ENVIRONMENT
  ```
- Push the Docker image to GCP Container Registry:
  ```sh
  docker push $FINAL_IMAGE
  ```
- Copy the credentials, env file, and run script to the server:
  ```sh
  rsync backend/.gcp_creds.json backend/.env/.env.$ENVIRONMENT scripts/run_backend.py $INSTANCE_USER@$INSTANCE_IP:/home/$INSTANCE_USER
  ```
- Log in to GCP Container Registry on the server:
  ```sh
  ssh $INSTANCE_USER@$INSTANCE_IP "cat $KEY_FILE | docker login -u $KEY_TYPE --password-stdin https://$HOSTNAME"
  ```
- Run the script on the server (a combined script sketch follows this list):
  ```sh
  ssh $INSTANCE_USER@$INSTANCE_IP "python3 run_backend.py --env=.env.$ENVIRONMENT --image=$FINAL_IMAGE"
  ```
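
Taken together, the steps above can be run as one script; a sketch that assumes the variables from the export step are already set in the environment:

```sh
#!/usr/bin/env bash
set -euo pipefail

# Authenticate against the registry, then build and push the image.
cat $KEY_FILE | docker login -u $KEY_TYPE --password-stdin https://$HOSTNAME
docker build -t $FINAL_IMAGE -f backend/Dockerfile backend --build-arg ENVIRONMENT=$ENVIRONMENT
docker push $FINAL_IMAGE

# Ship the credentials, env file, and run script to the instance, then start the backend.
rsync backend/.gcp_creds.json backend/.env/.env.$ENVIRONMENT scripts/run_backend.py $INSTANCE_USER@$INSTANCE_IP:/home/$INSTANCE_USER
ssh $INSTANCE_USER@$INSTANCE_IP "cat $KEY_FILE | docker login -u $KEY_TYPE --password-stdin https://$HOSTNAME"
ssh $INSTANCE_USER@$INSTANCE_IP "python3 run_backend.py --env=.env.$ENVIRONMENT --image=$FINAL_IMAGE"
```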