
dashboard: show kubectl describe output when deployment fails #4749

Closed
Avy4140 opened this issue Jul 13, 2019 · 18 comments
Labels
co/dashboard: dashboard related issues
good first issue: Denotes an issue ready for a new contributor, according to the "help wanted" guidelines.
help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
kind/feature: Categorizes issue or PR as related to a new feature.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
priority/backlog: Higher priority than priority/awaiting-more-evidence.

Comments

@Avy4140 commented Jul 13, 2019

When I enter the command minikube dashboard, this is the response I'm getting:

Asarma-M15MBP:~ asarma$ minikube dashboard
🤔 Verifying dashboard health ...
🚀 Launching proxy ...
🤔 Verifying proxy health ...
💣 http://127.0.0.1:50411/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/ is not responding properly: Temporary Error: unexpected response code: 503
Temporary Error: unexpected response code: 503
[the 503 line above repeats ~60 times in total]
Asarma-M15MBP:~ asarma$
I don't know what the issue is; it's the first time I'm installing Minikube.

@tstromberg (Contributor)

Could you attach the output of:

kubectl get po -A
minikube logs

Thanks!

@Avy4140 (Author) commented Jul 13, 2019

Last login: Fri Jul 12 20:05:02 on ttys000
Asarma-M15MBP:~ asarma$ kubectl get po -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default extended-resource-demo 0/1 Pending 0 150m
default extended-resource-demo-2 0/1 Pending 0 149m
default hello-node-55b49fb9f8-k4xrc 1/1 Running 1 3h8m
default kubernetes-bootcamp-5b48cfdcbd-2dp2s 1/1 Running 1 173m
default nginx-deployment-7448597cd5-74v2g 1/1 Running 1 3h1m
default nginx-deployment-7448597cd5-7jqzg 1/1 Running 1 3h1m
default nginx-deployment-7448597cd5-7vc5c 1/1 Running 1 3h1m
default nginx-deployment-7448597cd5-hgrw8 1/1 Running 2 3h1m
default nginx-deployment-7448597cd5-j88d7 1/1 Running 1 3h1m
default nginx-deployment-7448597cd5-n5k9w 1/1 Running 1 3h3m
default nginx-deployment-7448597cd5-n9t42 1/1 Running 1 3h3m
default nginx-deployment-7448597cd5-rgn4c 1/1 Running 1 3h3m
default nginx-deployment-79878fbcb6-69wkw 0/1 ErrImagePull 0 3h
default nginx-deployment-79878fbcb6-bvms9 0/1 ImagePullBackOff 0 3h
default nginx-deployment-79878fbcb6-gbc4h 0/1 ImagePullBackOff 0 3h
default nginx-deployment-79878fbcb6-mxsp4 0/1 ImagePullBackOff 0 3h
default nginx-deployment-79878fbcb6-sfrll 0/1 ImagePullBackOff 0 3h
default redis 1/1 Running 2 148m
default task-pv-pod 1/1 Running 2 142m
kube-system coredns-5c98db65d4-bnqtq 0/1 CrashLoopBackOff 17 3h13m
kube-system coredns-5c98db65d4-pd8gh 0/1 CrashLoopBackOff 17 3h13m
kube-system etcd-minikube 1/1 Running 0 32m
kube-system heapster-g77xm 1/1 Running 1 3h5m
kube-system influxdb-grafana-fzkpw 2/2 Running 2 3h5m
kube-system kube-addon-manager-minikube 1/1 Running 10 3h12m
kube-system kube-apiserver-minikube 1/1 Running 0 32m
kube-system kube-controller-manager-minikube 1/1 Running 2 29m
kube-system kube-proxy-bz4tj 1/1 Running 2 3h13m
kube-system kube-scheduler-minikube 1/1 Running 6 3h12m
kube-system kubernetes-dashboard-7b8ddcb5d6-sg298 0/1 CrashLoopBackOff 28 3h9m
kube-system metrics-server-84bb785897-hdv9w 0/1 ImagePullBackOff 26 3h13m
kube-system storage-provisioner 0/1 Error 11 3h13m
mem-example memory-demo 1/1 Running 1 158m
mem-example memory-demo-2 0/1 ImagePullBackOff 20 156m
mem-example memory-demo-3 0/1 Pending 0 155m
qos-example qos-demo 1/1 Running 1 153m
qos-example qos-demo-2 1/1 Running 1 152m
qos-example qos-demo-3 1/1 Running 1 151m
qos-example qos-demo-4 2/2 Running 2 151m
Asarma-M15MBP:~ asarma$ minikube logs

💣 command runner: getting ssh client for bootstrapper: Error dialing tcp via ssh client: ssh: handshake failed: read tcp 127.0.0.1:51959->127.0.0.1:55038: read: connection reset by peer

😿 Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉 https://github.com/kubernetes/minikube/issues/new
Asarma-M15MBP:~ asarma$

@Avy4140 (Author) commented Jul 13, 2019

My minikube crashed.

@tstromberg (Contributor) commented Jul 13, 2019

Given the number of pods you have, it's likely that your minikube environment ran out of resources, specifically RAM. You can increase it by using --memory. The default is only 2GB; you may want 8GB+ for all the addons I see enabled there.

If the apiserver is still running, do you mind sharing the output of:

kubectl describe pod kubernetes-dashboard-7b8ddcb5d6-tvgc4 -n kube-system

@Avy4140 (Author) commented Jul 13, 2019

Asarma-M15MBP:~ asarma$ kubectl describe pod kubernetes-dashboard-7b8ddcb5d6-tvgc4 -n kube-system
Error from server (NotFound): pods "kubernetes-dashboard-7b8ddcb5d6-tvgc4" not found

@tstromberg (Contributor)

Sorry, bad cut and paste. The pod name in your example is different. Try:

kubectl describe pod kubernetes-dashboard-7b8ddcb5d6-sg298 -n kube-system

I'll be curious to see what it says.

Since minikube logs is failing due to ssh problems, your next step is to run minikube delete, then minikube start with more --memory and possibly a higher --cpus count.
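
For reference, the full recovery sequence might look like the following. The memory and CPU values are illustrative assumptions, not minikube defaults; pick what your machine can spare:

minikube delete
# illustrative sizing; adjust to your hardware
minikube start --memory 8192 --cpus 4

To avoid copying a stale pod name (the hash suffix changes every time the pod is recreated), the dashboard pod can also be targeted by label, using the app=kubernetes-dashboard label visible in the describe output below:

# selects the dashboard pod by label instead of by its generated name
kubectl describe pod -n kube-system -l app=kubernetes-dashboard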

@Avy4140 (Author) commented Jul 13, 2019

This is the output:

$ kubectl describe pod kubernetes-dashboard-7b8ddcb5d6-sg298 -n kube-system
Name:           kubernetes-dashboard-7b8ddcb5d6-sg298
Namespace:      kube-system
Priority:       0
Node:           minikube/10.0.2.15
Start Time:     Fri, 12 Jul 2019 17:30:57 -0400
Labels:         addonmanager.kubernetes.io/mode=Reconcile
                app=kubernetes-dashboard
                pod-template-hash=7b8ddcb5d6
                version=v1.10.1
Annotations:    <none>
Status:         Running
IP:             172.17.0.18
Controlled By:  ReplicaSet/kubernetes-dashboard-7b8ddcb5d6
Containers:
  kubernetes-dashboard:
    Container ID:  docker://480610be5444a037b30f16b958c607b029781c6e5848eb74ebc41307e1eeba2e
    Image:         k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
    Image ID:      docker://sha256:f9aed6605b814b69e92dece6a50ed1e4e730144eb1cc971389dde9cb3820d124
    Port:          9090/TCP
    Host Port:     0/TCP
    Args:
      --disable-settings-authorizer
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Fri, 12 Jul 2019 20:50:01 -0400
      Finished:     Fri, 12 Jul 2019 20:50:04 -0400
    Ready:          False
    Restart Count:  30
    Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-lfgdk (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  default-token-lfgdk:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-lfgdk
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason          Age                     From               Message
  ----     ------          ----                    ----               -------
  Warning  BackOff         141m (x288 over 3h21m)  kubelet, minikube  Back-off restarting failed container
  Warning  FailedMount     44m (x2 over 44m)       kubelet, minikube  MountVolume.SetUp failed for volume "default-token-lfgdk" : couldn't propagate object cache: timed out waiting for the condition
  Normal   SandboxChanged  44m                     kubelet, minikube  Pod sandbox changed, it will be killed and re-created.
  Normal   Created         41m (x4 over 44m)       kubelet, minikube  Created container kubernetes-dashboard
  Normal   Started         41m (x4 over 44m)       kubelet, minikube  Started container kubernetes-dashboard
  Normal   Pulled          24m (x8 over 44m)       kubelet, minikube  Container image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1" already present on machine
  Warning  BackOff         4m44s (x165 over 44m)   kubelet, minikube  Back-off restarting failed container
Asarma-M15MBP:~ asarma$ 

@Avy4140 (Author) commented Jul 13, 2019

Thank you so much!! It's working now. I increased the memory and the CPU count.

@afbjorklund (Collaborator)

Something like #3574 would probably have helped to diagnose this.

@afbjorklund afbjorklund added the kind/support Categorizes issue or PR as a support question. label Jul 13, 2019
@tstromberg tstromberg changed the title Minikube Dashboard Issue dashboard: unexpected response code: 503 when cluster resources are exceeded Jul 16, 2019
@tstromberg tstromberg added co/dashboard dashboard related issues needs-solution-message Issues where offering a solution for an error would be helpful and removed kind/support Categorizes issue or PR as a support question. labels Jul 16, 2019
@tstromberg tstromberg changed the title dashboard: unexpected response code: 503 when cluster resources are exceeded dashboard: unexpected response code: 503 when resources are exceeded Jul 16, 2019
@tstromberg (Contributor)

Thanks for confirming the issue. We should definitely make the dashboard implementation more robust against resource exhaustion. It would be nice if minikube gave a better hint here.

@tstromberg tstromberg added help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. priority/backlog Higher priority than priority/awaiting-more-evidence. labels Jul 16, 2019
@krishnamanaiducloud commented Aug 9, 2019

Run the first command to create the role binding, then delete the dashboard pod with the second command (kubernetes-dashboard-pod is a placeholder; substitute your actual dashboard pod name):

$ kubectl create clusterrolebinding kube-system-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
$ kubectl delete pod kubernetes-dashboard-pod -n kube-system

and then:

$ minikube dashboard
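
If the first command appears to have no effect, a quick sanity check is to confirm the binding actually exists:

# verifies the cluster role binding created above
kubectl get clusterrolebinding kube-system-cluster-admin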

@tstromberg tstromberg changed the title dashboard: unexpected response code: 503 when resources are exceeded dashboard: show kubectl describe output when deployment fails Sep 19, 2019
@tstromberg tstromberg added kind/feature Categorizes issue or PR as related to a new feature. good first issue Denotes an issue ready for a new contributor, according to the "help wanted" guidelines. and removed needs-solution-message Issues where offering a solution for an error would be helpful labels Sep 19, 2019
@tstromberg (Contributor)

I am magically turning this into an actionable feature request.
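
A minimal sketch of what that hint could look like, expressed here as shell commands (the real change would live in minikube's dashboard health-check code; the label selector is an assumption based on the pod labels shown earlier, and the sed filter just trims the describe output down to its Events section):

# hypothetical diagnostic, not current minikube behavior:
# on a 503 from the dashboard proxy, surface the pod's recent events
kubectl describe pod -n kube-system -l app=kubernetes-dashboard | sed -n '/Events:/,$p'

In this thread, that Events block would have pointed straight at the CrashLoopBackOff and back-off-restarting messages.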

@priyawadhwa commented Dec 16, 2019

If anyone is interested in working on this issue, feel free to assign yourself by commenting /assign. This would be really useful for debugging dashboard issues.

Update: We've also updated the dashboard version and now we pre-cache it, so this error is less likely to happen.

@fejta-bot
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 16, 2020
@prasadkatti (Contributor)

Is this perhaps also related to #7105?

@fejta-bot
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 22, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot (Contributor)

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
