A sample project that deploys a Python REST API application together with a Service, HorizontalPodAutoscaler, Ingress, and BackendConfig on GKE.
PROJECT_ID="sample-project" # replace with your project
COMPUTE_REGION="us-central1"
gcloud config set project ${PROJECT_ID}
gcloud config set compute/region ${COMPUTE_REGION}
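You can confirm the active project and region before continuing:
gcloud config list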
Create an Autopilot GKE cluster. It may take around 9 minutes.
gcloud container clusters create-auto sample-cluster --region=${COMPUTE_REGION}
gcloud container clusters get-credentials sample-cluster
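As a quick sanity check, confirm that the cluster is up and that kubectl points at it:
gcloud container clusters list
kubectl config current-context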
Build the container image and push it to GCR:
cd ../app
docker build -t python-ping-api . --platform linux/amd64
docker tag python-ping-api:latest gcr.io/${PROJECT_ID}/python-ping-api:latest
gcloud auth configure-docker
docker push gcr.io/${PROJECT_ID}/python-ping-api:latest
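To verify that the image and tag landed in GCR:
gcloud container images list-tags gcr.io/${PROJECT_ID}/python-ping-api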
Create and deploy the K8s Deployment, Service, HorizontalPodAutoscaler, Ingress, and GKE BackendConfig using the python-ping-api-template.yaml template file.
sed -e "s|<project-id>|${PROJECT_ID}|g" python-ping-api-template.yaml > python-ping-api.yaml
cat python-ping-api.yaml
kubectl apply -f python-ping-api.yaml
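To check that all resources from the manifest were created (the resource names below follow the template; adjust them if yours differ):
kubectl get deployment,service,hpa,backendconfig,ingress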
It may take around 5 minutes for the load balancer to be created and for its health checks to pass.
Confirm the pod configuration and logs after deployment:
kubectl logs -l app=python-ping-api
kubectl describe pods
kubectl get ingress python-ping-api-ingress
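The GKE ingress controller also reports backend health through the ingress annotations; once the backends show HEALTHY, the endpoint should be ready to serve traffic:
kubectl describe ingress python-ping-api-ingress | grep backends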
Confirm the response of the /ping API:
LB_IP_ADDRESS=$(gcloud compute forwarding-rules list | grep python-ping-api | awk '{ print $2 }')
echo ${LB_IP_ADDRESS}
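Alternatively, the same address can be read from the ingress status once it has been assigned:
kubectl get ingress python-ping-api-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}'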
curl http://${LB_IP_ADDRESS}/ping
{
"host": "<your-ingress-endpoint-ip>",
"message": "ping-api",
"method": "GET",
"url": "http://<your-ingress-endpoint-ip>/ping"
}
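Optionally, you can generate some load against the endpoint and watch the HorizontalPodAutoscaler react; this is only a rough sketch, so adjust the request count to your needs:
for i in $(seq 1 1000); do curl -s http://${LB_IP_ADDRESS}/ping > /dev/null; done &
kubectl get hpa --watch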
- Load balancer
- Load balancer details
Clean up the resources when you are finished:
kubectl delete -f app/python-ping-api.yaml
gcloud container clusters delete sample-cluster
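If you no longer need the container image, it can be removed from GCR as well:
gcloud container images delete gcr.io/${PROJECT_ID}/python-ping-api:latest --quiet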