- `docker-engine` for executing the `kubernetes-anywhere` deployment, which can be downloaded here.
- `make` for entering the deployment environment.

Note: The deployment is tested with Kubernetes v1.4.0 and v1.4.4.
You must upload the template to vCenter before deploying Kubernetes:

- Log in to vSphere Client.
- Right-click the ESX host on which you want to deploy the template.
- Select `Deploy OVF template`.
- Copy and paste the URL for the OVA.
- Follow the remaining steps according to the instructions in the wizard.
You can also upload the OVA using `govc`.

Note: This OVA is based on Photon OS (v1.0) with virtual hardware v11.
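As a sketch of the `govc` route, the import can look like the following. The endpoint, credentials, datastore name, and OVA URL below are placeholders you must replace with your own values:

```shell
# Placeholder connection details -- substitute your own vCenter values.
export GOVC_URL='https://vcenter.example.com/sdk'
export GOVC_USERNAME='administrator@vsphere.local'
export GOVC_PASSWORD='changeme'
export GOVC_INSECURE=1   # only if vCenter uses a self-signed certificate

# Import the OVA into a datastore; the template name and URL are illustrative.
govc import.ova -ds=datastore1 -name=KubernetesAnywhereTemplatePhotonOS \
  'https://example.com/photon-ova.ova'
```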
```shell
git clone https://github.com/kubernetes/kubernetes-anywhere
cd kubernetes-anywhere
make docker-dev
make deploy
```
Then complete the config wizard to deploy a kubernetes-anywhere cluster.
Notes:

- To properly boot a cluster in vSphere, you MUST set this value in the wizard:
  * `phase2.installer_container = "docker.io/ashivani/k8s-ignition:v4"`
- To change the configuration, run:
  `make config .config`
- The deployment is configured to use DHCP.
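For reference, the wizard stores its answers in `.config`. Assuming the wizard's usual dotted-key serialization (an assumption; surrounding entries are omitted), the critical setting above ends up as a line like:

```
.phase2.installer_container="docker.io/ashivani/k8s-ignition:v4"
```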
You have a Kubernetes cluster!

Note: If you want to launch another cluster while keeping the existing one, clone the kubernetes-anywhere repository into a separate directory and follow the steps above.
First, set `KUBECONFIG` to access the cluster using `kubectl`:

```shell
export KUBECONFIG=phase1/vsphere/.tmp/kubeconfig.json
```

You will get cluster information when you run:

```shell
kubectl cluster-info
```
After you've had a great experience with Kubernetes, run:

```shell
make destroy
```

to tear down your cluster.
- `make destroy` is flaky: Terraform can fail to destroy the VMs and remove the state for the existing cluster.
  - Workaround: In vSphere Client:
    - Stop all VMs that were set up by kubernetes-anywhere.
    - Right-click each VM and select `Delete from Disk`.
    - Run `make clean`.
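If you prefer the CLI, the same workaround can be sketched with `govc`. The VM names below are illustrative; list yours with `govc ls`:

```shell
# Power off, then delete, each VM created by kubernetes-anywhere
# (names are placeholders -- substitute your own).
govc vm.power -off -force kubernetes-master kubernetes-node1 kubernetes-node2
govc vm.destroy kubernetes-master kubernetes-node1 kubernetes-node2

# Remove local deployment state.
make clean
```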
If no nodes are available, there was likely a provisioning failure on the master (either in vSphere or in the ignition provisioning container). The following steps will help in troubleshooting:

- SSH to the master.
- Use the following command to collect the relevant logs:
  `journalctl -u kubelet`
- Attach the logs to a new issue in this repository.
If a specific node is missing:

- Use `kubectl get nodes` to identify the missing nodes.
- Use vSphere Client or `govc` to find the node and the node's IP address.
- SSH to the master, then to the missing node.
- Use the following command to collect the relevant logs:
  `journalctl -u kubelet`
- Attach the logs to a new issue in this repository.
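Since worker nodes may only be reachable through the master, collecting the kubelet logs can be sketched as follows. The IP addresses and the `root` user are assumptions (Photon OS templates commonly use `root`):

```shell
# Placeholder addresses -- take them from vSphere Client or govc.
MASTER_IP=10.0.0.10
NODE_IP=10.0.0.11

# Hop through the master to the missing node and save its kubelet logs
# locally, ready to attach to an issue.
ssh -A "root@${MASTER_IP}" ssh "root@${NODE_IP}" \
  'journalctl -u kubelet --no-pager' > "kubelet-${NODE_IP}.log"
```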
If the dashboard fails, this is most likely a flannel failure:

- Use `kubectl describe pod <dashboard-pod-name>` to identify the node on which the dashboard pod is scheduled.
- Use vSphere Client or `govc` to find the node and the node's IP address.
- SSH to the node.
- Use the following command on the node to collect the relevant logs:
  `journalctl -u flanneld`
- Attach the logs to a new issue in this repository.