Performance testing on GKE using a sample application built with https://echo.labstack.com.
- 1. Create a GKE cluster
- 2. Deploy two applications to check per-Pod performance and scaling
- 3. Performance Testing
- Cleanup
- Install the gcloud CLI
- Install kubectl and configure cluster access
- Install and upgrade Taurus
COMPUTE_REGION="us-central1"
# replace with your project
PROJECT_ID="sample-project"
gcloud config set project ${PROJECT_ID}
gcloud config set compute/region ${COMPUTE_REGION}
Create an Autopilot GKE cluster. It may take around 9 minutes.
gcloud container clusters create-auto sample-cluster --region=${COMPUTE_REGION}
gcloud container clusters get-credentials sample-cluster --region=${COMPUTE_REGION}
Build and push to GCR:
cd app
docker build -t go-echo-api . --platform linux/amd64
docker tag go-echo-api:latest gcr.io/${PROJECT_ID}/go-echo-api:latest
gcloud auth configure-docker
docker push gcr.io/${PROJECT_ID}/go-echo-api:latest
kubectl get namespaces
kubectl create namespace echo-test
Each of the two deployments may take around 5 minutes to create its load balancer, including health checks.
To check requests per second (RPS) WITHOUT scaling, create and deploy the K8s Deployment, Service, HorizontalPodAutoscaler, Ingress, and GKE BackendConfig resources using the go-echo-api-onepod-template.yaml template file (a rough sketch of the rendered manifest follows the commands below):
sed -e "s|<project-id>|${PROJECT_ID}|g" go-echo-api-onepod-template.yaml > go-echo-api-onepod.yaml
cat go-echo-api-onepod.yaml
kubectl get namespaces
kubectl apply -f go-echo-api-onepod.yaml -n echo-test --dry-run=client
kubectl apply -f go-echo-api-onepod.yaml -n echo-test
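For orientation, the rendered go-echo-api-onepod.yaml generally has the shape sketched below. The Deployment, HPA, and Ingress names match the ones referenced later in this guide, but the container port, resource requests, and the Service/BackendConfig names and settings are illustrative assumptions; the template in the repository is authoritative.

```yaml
# Illustrative sketch only -- the real go-echo-api-onepod-template.yaml may differ.
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: go-echo-api-onepod-backendconfig   # assumed name
spec:
  healthCheck:
    type: HTTP
    requestPath: /
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-echo-api-onepod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: go-echo-api-onepod
  template:
    metadata:
      labels:
        app: go-echo-api-onepod
    spec:
      containers:
      - name: go-echo-api
        image: gcr.io/<project-id>/go-echo-api:latest   # replaced by sed
        ports:
        - containerPort: 8080          # assumed app port
        resources:
          requests:
            cpu: 250m                  # assumed requests
            memory: 256Mi
---
apiVersion: v1
kind: Service
metadata:
  name: go-echo-api-onepod-service     # assumed name
  annotations:
    cloud.google.com/backend-config: '{"default": "go-echo-api-onepod-backendconfig"}'
spec:
  selector:
    app: go-echo-api-onepod
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: go-echo-api-onepod-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: go-echo-api-onepod
  minReplicas: 1
  maxReplicas: 1                       # assumed: pinned to one Pod
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: go-echo-api-onepod-ingress
spec:
  defaultBackend:
    service:
      name: go-echo-api-onepod-service
      port:
        number: 80
```

Keeping the HPA pinned to a single replica (an assumption in this sketch) is what makes this variant suitable for measuring per-Pod RPS.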
Confirm Pod logs and configuration after deployment:
kubectl logs -l app=go-echo-api-onepod -n echo-test
kubectl describe pods -n echo-test
kubectl get ingress go-echo-api-onepod-ingress -n echo-test
To check requests per second (RPS) with scaling, create and deploy the K8s Deployment, Service, HorizontalPodAutoscaler, Ingress, and GKE BackendConfig resources using the go-echo-api-template.yaml template file (see the HorizontalPodAutoscaler sketch after the commands below):
sed -e "s|<project-id>|${PROJECT_ID}|g" go-echo-api-template.yaml > go-echo-api.yaml
cat go-echo-api.yaml
kubectl apply -f go-echo-api.yaml -n echo-test --dry-run=client
kubectl apply -f go-echo-api.yaml -n echo-test
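The scaled variant differs mainly in its HorizontalPodAutoscaler, which lets GKE add Pods under load. A minimal sketch, assuming a CPU-utilization target and an upper replica bound; the real thresholds are whatever go-echo-api-template.yaml defines:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: go-echo-api-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: go-echo-api
  minReplicas: 1                 # assumed
  maxReplicas: 10                # assumed upper bound
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50   # assumed CPU target
```

On Autopilot, new replicas may also require new node capacity, so scale-out can take a few minutes to show up in the measured RPS.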
Confirm Pod logs and configuration after deployment:
kubectl logs -l app=go-echo-api -n echo-test
kubectl describe pods -n echo-test
kubectl get ingress go-echo-api-ingress -n echo-test
Confirm the response of the / API:
LB_IP_ADDRESS=$(gcloud compute forwarding-rules list | grep go-echo-api | awk '{ print $2 }')
echo ${LB_IP_ADDRESS}
curl http://${LB_IP_ADDRESS}/
Install Taurus by following https://gettaurus.org/install/Installation/:
sudo apt-get update -y
sudo apt-get install python3 default-jre-headless python3-tk python3-pip python3-dev libxml2-dev libxslt-dev zlib1g-dev net-tools -y
sudo python3 -m pip install bzt
sudo apt-get install htop -y
cd test
# test with 300 threads and connection:close option (see the config sketch below)
bzt echo-bzt-onepod.yaml
kubectl describe hpa go-echo-api-onepod-hpa -n echo-test
kubectl get hpa go-echo-api-onepod-hpa -n echo-test -w
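echo-bzt-onepod.yaml is a standard Taurus config. A minimal sketch, assuming the load profile described in the comment above (300 concurrent threads, Connection: close) and a placeholder target address; the file in the test/ directory is authoritative:

```yaml
execution:
- concurrency: 300        # "300 threads"
  ramp-up: 1m             # assumed ramp-up
  hold-for: 5m            # assumed duration
  scenario: echo-onepod

scenarios:
  echo-onepod:
    headers:
      Connection: close   # open a new TCP connection per request
    requests:
    - http://<lb-ip-address>/   # the address printed by the earlier echo command
```

Sending Connection: close prevents the load balancer and Pods from reusing keep-alive connections, so the test stresses connection setup as well as request handling.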
cd test
# test with 2000 threads and connection:close option (see the config sketch below)
bzt echo-bzt.yaml
kubectl describe hpa go-echo-api-hpa -n echo-test
kubectl get hpa go-echo-api-hpa -n echo-test -w
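echo-bzt.yaml presumably keeps the same scenario shape and raises only the load level; under that assumption, its execution block would look something like:

```yaml
execution:
- concurrency: 2000       # "2000 threads", Connection: close scenario as above
  ramp-up: 2m             # assumed
  hold-for: 5m            # assumed
  scenario: echo
```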
kubectl scale deployment go-echo-api-onepod -n echo-test --replicas=0
kubectl scale deployment go-echo-api -n echo-test --replicas=0
kubectl delete -f app/go-echo-api-onepod.yaml -n echo-test
kubectl delete -f app/go-echo-api.yaml -n echo-test