none: waiting for apiserver: timed out waiting for the condition (kubelet port conflict) #4500
Some of this was described in #4473 (comment). There are some extra steps needed on CentOS, which is not (yet) a supported platform.
Here's the key failure:
Any idea what is listening on port 10250? CentOS should be OK.
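A quick way to answer that (a sketch, assuming iproute2's `ss` is available, as it is on CentOS 7) is to list whatever is bound to the kubelet port:

```shell
# List whatever is listening on the kubelet port (10250).
# Run as root to see the owning process name/PID in the -p column.
ss -tlnp | grep ':10250' || echo "nothing listening on 10250"
```

If a stale kubelet shows up as the owner, stopping it (e.g. `systemctl stop kubelet`) before re-running `minikube start` should free the port.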
I wonder if we used to stop kubelet at startup, but no longer do?
@tstromberg Hello, thanks for replying!
@just-another-dude have you by chance killed minikube while it was trying to stop?
I suspect this is related to #4418.
Hey @medyagh, not that I can remember. Anyway, what do you recommend I do, exactly?
You can try starting or stopping minikube with the none driver and then killing it halfway with Ctrl+C. By the way, I merged a PR to head master that should fix flaky minikube stops.
I am curious whether you still have the issue with the latest minikube on head master.
@medyagh, Everything seems to work fine right now, thanks! |
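For anyone else hitting the same failure, a minimal preflight check (a sketch, not part of minikube) can confirm the kubelet port is free before running `minikube start`, mirroring the `listen tcp 0.0.0.0:10250: bind: address already in use` error in the logs:

```shell
#!/bin/sh
# Sketch: fail fast if the kubelet port is already bound, which is the
# condition behind "bind: address already in use" in the kubelet log.
PORT=10250
if ss -tln | grep -q ":$PORT "; then
    echo "port $PORT is already in use; stop the stale process first" >&2
    exit 1
fi
echo "port $PORT is free"
```

Running this before `minikube --vm-driver=none start` makes the port conflict visible up front instead of surfacing as an apiserver timeout.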
The exact command to reproduce the issue:
minikube --vm-driver=none start
The full output of the command that failed:
X Error restarting cluster: waiting for apiserver: timed out waiting for the condition
The output of the minikube logs command:
==> dmesg <==
dmesg: invalid option -- '='
Usage:
dmesg [options]
Options:
-C, --clear clear the kernel ring buffer
-c, --read-clear read and clear all messages
-D, --console-off disable printing messages to console
-d, --show-delta show time delta between printed messages
-e, --reltime show local time and time delta in readable format
-E, --console-on enable printing messages to console
-F, --file use the file instead of the kernel log buffer
-f, --facility restrict output to defined facilities
-H, --human human readable output
-k, --kernel display kernel messages
-L, --color colorize messages
-l, --level restrict output to defined levels
-n, --console-level set level of messages printed to console
-P, --nopager do not pipe output into a pager
-r, --raw print the raw message buffer
-S, --syslog force to use syslog(2) rather than /dev/kmsg
-s, --buffer-size buffer size to query the kernel ring buffer
-T, --ctime show human readable timestamp (could be
inaccurate if you have used SUSPEND/RESUME)
-t, --notime don't print messages timestamp
-u, --userspace display userspace messages
-w, --follow wait for new messages
-x, --decode decode facility and level to readable string
-h, --help display this help and exit
-V, --version output version information and exit
Supported log facilities:
kern - kernel messages
user - random user-level messages
mail - mail system
daemon - system daemons
auth - security/authorization messages
syslog - messages generated internally by syslogd
lpr - line printer subsystem
news - network news subsystem
Supported log levels (priorities):
emerg - system is unusable
alert - action must be taken immediately
crit - critical conditions
err - error conditions
warn - warning conditions
notice - normal but significant condition
info - informational
debug - debug-level messages
For more details see dmesg(1).
==> kernel <==
06:57:33 up 9:13, 2 users, load average: 0.00, 0.03, 0.09
Linux localhost.localdomain 3.10.0-957.1.3.el7.x86_64 #1 SMP Thu Nov 29 14:49:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
==> kubelet <==
-- Logs begin at Fri 2019-06-14 21:44:20 IDT, end at Sat 2019-06-15 06:57:33 IDT. --
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: I0615 06:57:29.322190 5289 server.go:418] Version: v1.14.3
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: I0615 06:57:29.322413 5289 plugins.go:103] No cloud provider specified.
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: I0615 06:57:29.385224 5289 server.go:629] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: I0615 06:57:29.385968 5289 container_manager_linux.go:261] container manager verified user specified cgroup-root exists: []
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: I0615 06:57:29.385982 5289 container_manager_linux.go:266] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms}
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: I0615 06:57:29.386077 5289 container_manager_linux.go:286] Creating device plugin manager: true
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: I0615 06:57:29.386117 5289 state_mem.go:36] [cpumanager] initializing new in-memory state store
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: I0615 06:57:29.386206 5289 state_mem.go:84] [cpumanager] updated default cpuset: ""
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: I0615 06:57:29.386216 5289 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: I0615 06:57:29.386333 5289 kubelet.go:279] Adding pod path: /etc/kubernetes/manifests
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: I0615 06:57:29.386362 5289 kubelet.go:304] Watching apiserver
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: I0615 06:57:29.404452 5289 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: I0615 06:57:29.404469 5289 client.go:104] Start docker client with request timeout=2m0s
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: E0615 06:57:29.404913 5289 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: E0615 06:57:29.404979 5289 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)minikube&limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: E0615 06:57:29.405041 5289 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)minikube&limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: W0615 06:57:29.408528 5289 docker_service.go:561] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: I0615 06:57:29.408550 5289 docker_service.go:238] Hairpin mode set to "hairpin-veth"
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: W0615 06:57:29.408687 5289 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: W0615 06:57:29.411321 5289 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup.
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: I0615 06:57:29.412859 5289 docker_service.go:253] Docker cri networking managed by kubernetes.io/no-op
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: I0615 06:57:29.433507 5289 docker_service.go:258] Docker Info: &{ID:JMP2:XNFF:MUHX:IYO6:3VU4:XBGB:4GF2:CDNP:UOY2:MAPQ:XDQ2:QFNV Containers:13 ContainersRunning:0 ContainersPaused:0 ContainersStopped:13 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:40 SystemTime:2019-06-15T06:57:29.413847556+03:00 LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:3.10.0-957.1.3.el7.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc000412c40 NCPU:1 MemTotal:1039331328 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:localhost.localdomain Labels:[] ExperimentalBuild:false ServerVersion:18.09.6 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bb71b10fd8f58240ca47fbb579b9d1028eea7c84 Expected:bb71b10fd8f58240ca47fbb579b9d1028eea7c84} RuncCommit:{ID:2b18fe1d885ee5083ef9f0838fee39b62d653e30 Expected:2b18fe1d885ee5083ef9f0838fee39b62d653e30} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default]}
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: I0615 06:57:29.433588 5289 docker_service.go:271] Setting cgroupDriver to cgroupfs
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: I0615 06:57:29.461977 5289 remote_runtime.go:62] parsed scheme: ""
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: I0615 06:57:29.461997 5289 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: I0615 06:57:29.462016 5289 remote_image.go:50] parsed scheme: ""
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: I0615 06:57:29.462022 5289 remote_image.go:50] scheme "" not registered, fallback to default scheme
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: I0615 06:57:29.462267 5289 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{/var/run/dockershim.sock 0 }]
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: I0615 06:57:29.462285 5289 clientconn.go:796] ClientConn switching balancer to "pick_first"
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: I0615 06:57:29.462328 5289 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0003a48d0, CONNECTING
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: I0615 06:57:29.462471 5289 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0003a48d0, READY
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: I0615 06:57:29.462488 5289 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{/var/run/dockershim.sock 0 }]
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: I0615 06:57:29.462494 5289 clientconn.go:796] ClientConn switching balancer to "pick_first"
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: I0615 06:57:29.462514 5289 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0003a4a60, CONNECTING
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: I0615 06:57:29.466078 5289 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0003a4a60, READY
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: I0615 06:57:29.472538 5289 kuberuntime_manager.go:210] Container runtime docker initialized, version: 18.09.6, apiVersion: 1.39.0
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: E0615 06:57:29.476327 5289 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: Get https://localhost:8443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: I0615 06:57:29.477668 5289 server.go:1054] Started kubelet
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: E0615 06:57:29.478357 5289 kubelet.go:1282] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: I0615 06:57:29.478829 5289 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: I0615 06:57:29.478855 5289 status_manager.go:152] Starting to sync pod status with apiserver
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: I0615 06:57:29.478870 5289 kubelet.go:1806] Starting kubelet main sync loop.
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: I0615 06:57:29.478880 5289 kubelet.go:1823] skipping pod synchronization - [container runtime status check may not have completed yet., PLEG is not healthy: pleg has yet to be successful.]
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: I0615 06:57:29.478953 5289 server.go:141] Starting to listen on 0.0.0.0:10250
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: I0615 06:57:29.481613 5289 server.go:343] Adding debug handlers to kubelet server.
Jun 15 06:57:29 localhost.localdomain kubelet[5289]: F0615 06:57:29.482148 5289 server.go:153] listen tcp 0.0.0.0:10250: bind: address already in use
Jun 15 06:57:29 localhost.localdomain systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Jun 15 06:57:29 localhost.localdomain systemd[1]: Unit kubelet.service entered failed state.
Jun 15 06:57:29 localhost.localdomain systemd[1]: kubelet.service failed.
The operating system version:
CentOS Linux release 7.6.1810 (Core)