Helm install of GPU operator doesn't run daemonset containers and validator containers #434
@premmotgi looks like NFD worker pods are not able to connect to master pods.
This connection is required for the NFD workers to send the GPU labels that get applied to each node; the GPU Operator depends on those labels to create the additional operand pods. Which CNI are you using, and can you check for CNI errors that would cause this?
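For the CNI check, a minimal sketch, assuming a manifest-based Calico install (the label and namespace differ for operator-based installs) and using the nfd-master service name the workers dial in the logs below:

kubectl get pods -n kube-system -l k8s-app=calico-node -o wide
# Probe the nfd-master service from a throwaway pod. busybox wget will fail fast
# with a protocol error if the port is reachable (it speaks gRPC, not HTTP), but
# will hang and time out if the packet path is broken.
kubectl run nfd-probe --rm -it --restart=Never --image=busybox -- \
  wget -qO- -T 5 http://gpu-operator-node-feature-discovery-master.gpu-operator.svc.cluster.local:8080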
@shivamerla Thanks for your quick response. I am using the Calico CNI. I checked whether it is working, and the CNI doesn't appear to have any issues. Below is the output (truncated):
[root@control01 ~]# kubectl create deployment pingtest --image=busybox --replicas=3 -- sleep infinity
--- 172.21.78.224 ping statistics ---
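For reference, a minimal sketch of the full pod-to-pod test behind the truncated output above; the peer pod IP is a placeholder to read from your own -o wide output:

kubectl create deployment pingtest --image=busybox --replicas=3 -- sleep infinity
kubectl get pods -l app=pingtest -o wide   # note each pod's IP and node
# ping a pingtest pod that landed on a *different* node:
kubectl exec deploy/pingtest -- ping -c 3 <peer-pod-ip>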
Previously I had GPU drivers installed directly on the nodes. I uninstalled them and purged all NVIDIA packages before installing the gpu-operator. Is there any issue with installing the gpu-operator after uninstalling host drivers?
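Stale bits from a host driver install can conflict with the operator's driver container, so it is worth confirming the purge was complete; a quick sketch (package commands depend on the distro):

lsmod | grep -i nvidia               # any nvidia kernel modules still loaded?
dpkg -l | grep -i nvidia             # Debian/Ubuntu package leftovers
rpm -qa | grep -i nvidia             # RHEL/CentOS package leftovers
command -v nvidia-smi && nvidia-smi  # a host-installed nvidia-smi should be gone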
I am having the same problem.
Looks like it is Calico related: #401 (comment)
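For readers landing here: one commonly reported remedy for this class of Calico problem (not necessarily the exact fix in #401) is pinning IP autodetection on multi-NIC nodes, since calico-node can pick the wrong interface and break cross-node pod traffic with exactly these i/o timeouts. A sketch, with ens192 as a placeholder interface name:

kubectl set env daemonset/calico-node -n kube-system IP_AUTODETECTION_METHOD=interface=ens192
kubectl rollout status daemonset/calico-node -n kube-system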
This issue was solved after switching to dockershim.
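For completeness, a rough outline of what switching to dockershim involves on a kubeadm cluster running Kubernetes < 1.24 (dockershim was removed in 1.24); paths and flags vary by distribution, so treat this as a sketch rather than a recipe:

# 1. In /var/lib/kubelet/kubeadm-flags.env, remove the remote-runtime flags, e.g.
#    --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock
#    so the kubelet falls back to its built-in dockershim (Docker must be installed).
systemctl restart kubelet
kubectl get nodes -o wide   # the CONTAINER-RUNTIME column should now show docker://...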
The template below is mostly useful for bug reports and support questions. Feel free to remove anything which doesn't apply to you and add more information where it makes sense.
1. Quick Debug Checklist
- Are the i2c_core and ipmi_msghandler kernel modules loaded on the nodes?
- Did you apply the CRD (kubectl describe clusterpolicies --all-namespaces)?
1. Issue or feature description
After the helm install command for the GPU Operator is run, only the node-feature-discovery pods are created; the GPU Operator operand daemonsets and the validator pods never start.
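For comparison, a sketch of what a healthy install creates; the exact operand set varies with chart version and values, so the names below are the usual defaults rather than a guarantee:

kubectl get ds -n gpu-operator
# Expected, roughly: nvidia-driver-daemonset, nvidia-container-toolkit-daemonset,
# nvidia-device-plugin-daemonset, gpu-feature-discovery, nvidia-dcgm-exporter,
# nvidia-operator-validator, plus the node-feature-discovery worker daemonset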
2. Steps to reproduce the issue
3. Information to attach (optional if deemed irrelevant)
kubernetes pods status:
kubectl get pods --all-namespaces
[root@control01 ~]# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default my-release-enterprise-steam-867df478d5-4x296 1/1 Running 0 4d23h
gpu-operator gpu-operator-7878f5869-mfnzc 1/1 Running 0 4d3h
gpu-operator gpu-operator-node-feature-discovery-master-59b4b67f4f-nsgpk 1/1 Running 0 4d3h
gpu-operator gpu-operator-node-feature-discovery-worker-7plcj 0/1 CrashLoopBackOff 975 4d3h
gpu-operator gpu-operator-node-feature-discovery-worker-8b9kq 0/1 CrashLoopBackOff 975 4d3h
gpu-operator gpu-operator-node-feature-discovery-worker-hh2zn 0/1 CrashLoopBackOff 975 4d3h
gpu-operator gpu-operator-node-feature-discovery-worker-r5jlv 0/1 CrashLoopBackOff 975 4d3h
gpu-operator gpu-operator-node-feature-discovery-worker-s8rlb 0/1 CrashLoopBackOff 974 4d3h
gpu-operator gpu-operator-node-feature-discovery-worker-sc9x2 0/1 CrashLoopBackOff 975 4d3h
gpu-operator gpu-operator-node-feature-discovery-worker-v9j7c 0/1 CrashLoopBackOff 975 4d3h
kubernetes daemonset status:
kubectl get ds --all-namespaces
If a pod/ds is in an error state or pending state
kubectl describe pod -n NAMESPACE POD_NAME
If a pod/ds is in an error state or pending state
kubectl logs -n NAMESPACE POD_NAME
[root@control01 ~]# kubectl logs gpu-operator-node-feature-discovery-worker-v9j7c -n gpu-operator
I1107 21:48:04.503798 1 nfd-worker.go:155] Node Feature Discovery Worker v0.10.1
I1107 21:48:04.503857 1 nfd-worker.go:156] NodeName: 'worker03.robin.ai.lab'
I1107 21:48:04.504345 1 nfd-worker.go:423] configuration file "/etc/kubernetes/node-feature-discovery/nfd-worker.conf" parsed
I1107 21:48:04.504407 1 nfd-worker.go:461] worker (re-)configuration successfully completed
I1107 21:48:04.504441 1 base.go:126] connecting to nfd-master at gpu-operator-node-feature-discovery-master:8080 ...
I1107 21:48:04.504475 1 component.go:36] [core]parsed scheme: ""
I1107 21:48:04.504480 1 component.go:36] [core]scheme "" not registered, fallback to default scheme
I1107 21:48:04.504495 1 component.go:36] [core]ccResolverWrapper: sending update to cc: {[{gpu-operator-node-feature-discovery-master:8080 0 }] }
I1107 21:48:04.504503 1 component.go:36] [core]ClientConn switching balancer to "pick_first"
I1107 21:48:04.504507 1 component.go:36] [core]Channel switches to new LB policy "pick_first"
I1107 21:48:04.504527 1 component.go:36] [core]Subchannel Connectivity change to CONNECTING
I1107 21:48:04.504551 1 component.go:36] [core]Subchannel picks a new address "gpu-operator-node-feature-discovery-master:8080" to connect
I1107 21:48:04.504594 1 component.go:36] [core]Channel Connectivity change to CONNECTING
W1107 21:48:24.505843 1 component.go:41] [core]grpc: addrConn.createTransport failed to connect to {gpu-operator-node-feature-discovery-master:8080 gpu-operator-node-feature-discovery-master:8080 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 172.19.100.15:8080: i/o timeout". Reconnecting...
I1107 21:48:24.505867 1 component.go:36] [core]Subchannel Connectivity change to TRANSIENT_FAILURE
I1107 21:48:24.505890 1 component.go:36] [core]Channel Connectivity change to TRANSIENT_FAILURE
I1107 21:48:25.505942 1 component.go:36] [core]Subchannel Connectivity change to CONNECTING
I1107 21:48:25.505963 1 component.go:36] [core]Subchannel picks a new address "gpu-operator-node-feature-discovery-master:8080" to connect
I1107 21:48:25.506042 1 component.go:36] [core]Channel Connectivity change to CONNECTING
W1107 21:48:45.506557 1 component.go:41] [core]grpc: addrConn.createTransport failed to connect to {gpu-operator-node-feature-discovery-master:8080 gpu-operator-node-feature-discovery-master:8080 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 172.19.100.15:8080: i/o timeout". Reconnecting...
I1107 21:48:45.506586 1 component.go:36] [core]Subchannel Connectivity change to TRANSIENT_FAILURE
I1107 21:48:45.506611 1 component.go:36] [core]Channel Connectivity change to TRANSIENT_FAILURE
I1107 21:48:47.031218 1 component.go:36] [core]Subchannel Connectivity change to CONNECTING
I1107 21:48:47.031247 1 component.go:36] [core]Subchannel picks a new address "gpu-operator-node-feature-discovery-master:8080" to connect
I1107 21:48:47.031372 1 component.go:36] [core]Channel Connectivity change to CONNECTING
I1107 21:49:04.505752 1 component.go:36] [core]Channel Connectivity change to SHUTDOWN
I1107 21:49:04.505778 1 component.go:36] [core]Subchannel Connectivity change to SHUTDOWN
F1107 21:49:04.505796 1 main.go:64] failed to connect: context deadline exceeded
[root@control01 ~]# kubectl logs gpu-operator-node-feature-discovery-worker-7plcj -n gpu-operator
I1107 21:59:22.763062 1 nfd-worker.go:155] Node Feature Discovery Worker v0.10.1
I1107 21:59:22.763113 1 nfd-worker.go:156] NodeName: 'worker08.robin.ai.lab'
I1107 21:59:22.763500 1 nfd-worker.go:423] configuration file "/etc/kubernetes/node-feature-discovery/nfd-worker.conf" parsed
I1107 21:59:22.763555 1 nfd-worker.go:461] worker (re-)configuration successfully completed
I1107 21:59:22.763586 1 base.go:126] connecting to nfd-master at gpu-operator-node-feature-discovery-master:8080 ...
I1107 21:59:22.763618 1 component.go:36] [core]parsed scheme: ""
I1107 21:59:22.763627 1 component.go:36] [core]scheme "" not registered, fallback to default scheme
I1107 21:59:22.763646 1 component.go:36] [core]ccResolverWrapper: sending update to cc: {[{gpu-operator-node-feature-discovery-master:8080 0 }] }
I1107 21:59:22.763662 1 component.go:36] [core]ClientConn switching balancer to "pick_first"
I1107 21:59:22.763666 1 component.go:36] [core]Channel switches to new LB policy "pick_first"
I1107 21:59:22.763682 1 component.go:36] [core]Subchannel Connectivity change to CONNECTING
I1107 21:59:22.763701 1 component.go:36] [core]Subchannel picks a new address "gpu-operator-node-feature-discovery-master:8080" to connect
I1107 21:59:22.763784 1 component.go:36] [core]Channel Connectivity change to CONNECTING
W1107 21:59:42.765933 1 component.go:41] [core]grpc: addrConn.createTransport failed to connect to {gpu-operator-node-feature-discovery-master:8080 gpu-operator-node-feature-discovery-master:8080 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 172.19.100.15:8080: i/o timeout". Reconnecting...
I1107 21:59:42.765964 1 component.go:36] [core]Subchannel Connectivity change to TRANSIENT_FAILURE
I1107 21:59:42.766005 1 component.go:36] [core]Channel Connectivity change to TRANSIENT_FAILURE
I1107 21:59:43.766068 1 component.go:36] [core]Subchannel Connectivity change to CONNECTING
I1107 21:59:43.766079 1 component.go:36] [core]Subchannel picks a new address "gpu-operator-node-feature-discovery-master:8080" to connect
I1107 21:59:43.766123 1 component.go:36] [core]Channel Connectivity change to CONNECTING
W1107 22:00:03.766610 1 component.go:41] [core]grpc: addrConn.createTransport failed to connect to {gpu-operator-node-feature-discovery-master:8080 gpu-operator-node-feature-discovery-master:8080 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 172.19.100.15:8080: i/o timeout". Reconnecting...
I1107 22:00:03.766635 1 component.go:36] [core]Subchannel Connectivity change to TRANSIENT_FAILURE
I1107 22:00:03.766666 1 component.go:36] [core]Channel Connectivity change to TRANSIENT_FAILURE
I1107 22:00:05.644606 1 component.go:36] [core]Subchannel Connectivity change to CONNECTING
I1107 22:00:05.644626 1 component.go:36] [core]Subchannel picks a new address "gpu-operator-node-feature-discovery-master:8080" to connect
I1107 22:00:05.644723 1 component.go:36] [core]Channel Connectivity change to CONNECTING
I1107 22:00:22.765866 1 component.go:36] [core]Channel Connectivity change to SHUTDOWN
I1107 22:00:22.765903 1 component.go:36] [core]Subchannel Connectivity change to SHUTDOWN
F1107 22:00:22.765921 1 main.go:64] failed to connect: context deadline exceeded
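A sketch for narrowing down where the timeout happens, using the ClusterIP from the logs above (172.19.100.15); the NFD label selector is the chart's usual one and may differ on your install, and the master pod IP is a placeholder to read from the -o wide output:

kubectl -n gpu-operator get svc gpu-operator-node-feature-discovery-master
kubectl -n gpu-operator get pods -o wide -l app.kubernetes.io/name=node-feature-discovery
# Then, on a GPU worker node (bash's /dev/tcp; success means the TCP connect worked):
timeout 5 bash -c 'echo > /dev/tcp/172.19.100.15/8080' && echo "service VIP reachable" || echo "service VIP unreachable"
timeout 5 bash -c 'echo > /dev/tcp/<master-pod-ip>/8080' && echo "pod IP reachable" || echo "pod IP unreachable"
# VIP fails but the pod IP connects -> kube-proxy/service-path problem;
# both time out -> cross-node pod routing problem in the CNI (see #401).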
Output of running a container on the GPU machine:
docker run -it alpine echo foo
[root@worker08 ~]# docker run -it alpine echo foo
Unable to find image 'alpine:latest' locally
latest: Pulling from library/alpine
213ec9aee27d: Already exists
Digest: sha256:bc41182d7ef5ffc53a40b044e725193bc10142a1243f395ee852a8d9730fc2ad
Status: Downloaded newer image for alpine:latest
foo
Docker configuration file:
cat /etc/docker/daemon.json
Docker runtime configuration:
docker info | grep runtime
NVIDIA shared directory:
ls -la /run/nvidia
NVIDIA packages directory:
ls -la /usr/local/nvidia/toolkit
NVIDIA driver directory:
ls -la /run/nvidia/driver
kubelet logs
journalctl -u kubelet > kubelet.logs