FailedTest: TestMultiNode/serial/RestartMultiNode kubelet not running after restart #9456

Closed
medyagh opened this issue Oct 13, 2020 · 3 comments · Fixed by #9476
Labels
kind/failing-test Categorizes issue or PR as related to a consistently or frequently failing test.

Comments

medyagh (Member) commented Oct 13, 2020

Seen with the virtualbox driver on the macOS GitHub Actions run: https://github.com/kubernetes/minikube/pull/9454/checks?check_run_id=1245044220

Also seen with the KVM driver: https://storage.googleapis.com/minikube-builds/logs/9432/315169e/KVM_Linux.html

-- stdout --
	multinode-20201013004012-797
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20201013004012-797-m02
	type: Worker
	host: Running
	kubelet: Stopped
	
-- /stdout --
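The stderr in the full logs below shows how this status is derived: the status probe runs `sudo systemctl is-active --quiet service kubelet` on each node over SSH and reports Stopped when that command exits non-zero. A minimal standalone sketch of that check follows (not minikube's actual implementation; the SSH target is a placeholder, whereas minikube uses its own SSH runner with the machine's key and a forwarded port):

package main

import (
	"fmt"
	"os/exec"
)

// kubeletState mirrors the `systemctl is-active --quiet` probe seen in the
// log: exit code 0 means the kubelet unit is active, anything else is
// treated as Stopped.
func kubeletState(sshTarget string) string {
	cmd := exec.Command("ssh", sshTarget, "sudo", "systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		// `systemctl is-active --quiet` exits non-zero for inactive/failed
		// units, which the status output reports as "kubelet: Stopped".
		return "Stopped"
	}
	return "Running"
}

func main() {
	// Placeholder target; in this run the log shows SSH going to 127.0.0.1
	// on a forwarded port with the machine's id_rsa key.
	fmt.Println(kubeletState("[email protected]"))
}

Run against the worker above, the same probe returns Stopped even though the VM itself is Running, which matches the status output.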

Full logs:

=== RUN TestMultiNode/serial/RestartMultiNode
multinode_test.go:226: (dbg) Run: ./minikube-darwin-amd64 start -p multinode-20201013004012-797 --wait=true -v=8 --alsologtostderr --driver=virtualbox
multinode_test.go:226: (dbg) Done: ./minikube-darwin-amd64 start -p multinode-20201013004012-797 --wait=true -v=8 --alsologtostderr --driver=virtualbox: (1m49.594815902s)
multinode_test.go:234: (dbg) Run: ./minikube-darwin-amd64 -p multinode-20201013004012-797 status --alsologtostderr
multinode_test.go:234: (dbg) Non-zero exit: ./minikube-darwin-amd64 -p multinode-20201013004012-797 status --alsologtostderr: exit status 2 (1.559119439s)
-- stdout --
multinode-20201013004012-797
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

multinode-20201013004012-797-m02
type: Worker
host: Running
kubelet: Stopped

-- /stdout --
** stderr **
W1013 00:51:57.134692 1261 root.go:252] Error reading config file at /Users/runner/work/minikube/minikube/minikube_binaries/testhome/.minikube/config/config.json: open /Users/runner/work/minikube/minikube/minikube_binaries/testhome/.minikube/config/config.json: no such file or directory
I1013 00:51:57.135240 1261 mustload.go:66] Loading cluster: multinode-20201013004012-797
I1013 00:51:57.135601 1261 status.go:222] checking status of multinode-20201013004012-797 ...
I1013 00:51:57.136032 1261 main.go:118] libmachine: COMMAND: /usr/local/bin/VBoxManage showvminfo multinode-20201013004012-797 --machinereadable
I1013 00:51:57.427858 1261 main.go:118] libmachine: STDOUT:
{
name="multinode-20201013004012-797"
groups="/"
ostype="Linux 2.6 / 3.x / 4.x (64-bit)"
UUID="46253d92-a114-48fc-a49f-85441870e39a"
CfgFile="/Users/runner/work/minikube/minikube/minikube_binaries/testhome/.minikube/machines/multinode-20201013004012-797/multinode-20201013004012-797/multinode-20201013004012-797.vbox"
SnapFldr="/Users/runner/work/minikube/minikube/minikube_binaries/testhome/.minikube/machines/multinode-20201013004012-797/multinode-20201013004012-797/Snapshots"
LogFldr="/Users/runner/work/minikube/minikube/minikube_binaries/testhome/.minikube/machines/multinode-20201013004012-797/multinode-20201013004012-797/Logs"
hardwareuuid="46253d92-a114-48fc-a49f-85441870e39a"
memory=2200
pagefusion="off"
vram=8
cpuexecutioncap=100
hpet="on"
cpu-profile="host"
chipset="piix3"
firmware="BIOS"
cpus=2
pae="on"
longmode="on"
triplefaultreset="off"
apic="on"
x2apic="off"
nested-hw-virt="off"
cpuid-portability-level=0
bootmenu="disabled"
boot1="dvd"
boot2="dvd"
boot3="disk"
boot4="none"
acpi="on"
ioapic="on"
biosapic="apic"
biossystemtimeoffset=0
rtcuseutc="on"
hwvirtex="on"
nestedpaging="on"
largepages="on"
vtxvpid="on"
vtxux="on"
paravirtprovider="default"
effparavirtprovider="kvm"
VMState="running"
VMStateChangeTime="2020-10-13T00:49:23.912000000"
graphicscontroller="vboxvga"
monitorcount=1
accelerate3d="off"
accelerate2dvideo="off"
teleporterenabled="off"
teleporterport=0
teleporteraddress=""
teleporterpassword=""
tracing-enabled="off"
tracing-allow-vm-access="off"
tracing-config=""
autostart-enabled="off"
autostart-delay=0
defaultfrontend=""
vmprocpriority="default"
storagecontrollername0="SATA"
storagecontrollertype0="IntelAhci"
storagecontrollerinstance0="0"
storagecontrollermaxportcount0="30"
storagecontrollerportcount0="30"
storagecontrollerbootable0="on"
"SATA-0-0"="/Users/runner/work/minikube/minikube/minikube_binaries/testhome/.minikube/machines/multinode-20201013004012-797/boot2docker.iso"
"SATA-ImageUUID-0-0"="e6ad9e73-a847-4eff-b879-188ecd37b70a"
"SATA-tempeject"="off"
"SATA-IsEjected"="off"
"SATA-1-0"="/Users/runner/work/minikube/minikube/minikube_binaries/testhome/.minikube/machines/multinode-20201013004012-797/disk.vmdk"
"SATA-ImageUUID-1-0"="d117e280-37e6-43e4-9bd5-bdf6016e47b8"
"SATA-2-0"="none"
"SATA-3-0"="none"
"SATA-4-0"="none"
"SATA-5-0"="none"
"SATA-6-0"="none"
"SATA-7-0"="none"
"SATA-8-0"="none"
"SATA-9-0"="none"
"SATA-10-0"="none"
"SATA-11-0"="none"
"SATA-12-0"="none"
"SATA-13-0"="none"
"SATA-14-0"="none"
"SATA-15-0"="none"
"SATA-16-0"="none"
"SATA-17-0"="none"
"SATA-18-0"="none"
"SATA-19-0"="none"
"SATA-20-0"="none"
"SATA-21-0"="none"
"SATA-22-0"="none"
"SATA-23-0"="none"
"SATA-24-0"="none"
"SATA-25-0"="none"
"SATA-26-0"="none"
"SATA-27-0"="none"
"SATA-28-0"="none"
"SATA-29-0"="none"
natnet1="nat"
macaddress1="08002739BDE5"
cableconnected1="on"
nic1="nat"
nictype1="virtio"
nicspeed1="0"
mtu="0"
sockSnd="64"
sockRcv="64"
tcpWndSnd="64"
tcpWndRcv="64"
Forwarding(0)="ssh,tcp,127.0.0.1,49343,,22"
hostonlyadapter2="vboxnet0"
macaddress2="0800272F69DD"
cableconnected2="on"
nic2="hostonly"
nictype2="virtio"
nicspeed2="0"
nic3="none"
nic4="none"
nic5="none"
nic6="none"
nic7="none"
nic8="none"
hidpointing="ps2mouse"
hidkeyboard="ps2kbd"
uart1="off"
uart2="off"
uart3="off"
uart4="off"
lpt1="off"
lpt2="off"
audio="coreaudio"
audio_out="off"
audio_in="off"
clipboard="disabled"
draganddrop="disabled"
SessionName="headless"
VideoMode="720,400,0"@0,0 1
vrde="off"
usb="off"
ehci="off"
xhci="off"
SharedFolderNameMachineMapping1="Users"
SharedFolderPathMachineMapping1="/Users"
VRDEActiveConnection="off"
VRDEClients==0
videocap="off"
videocapaudio="off"
capturescreens=""
capturefilename="/Users/runner/work/minikube/minikube/minikube_binaries/testhome/.minikube/machines/multinode-20201013004012-797/multinode-20201013004012-797/multinode-20201013004012-797.webm"
captureres="1024x768"
capturevideorate=512
capturevideofps=25
captureopts=""
GuestMemoryBalloon=0
GuestOSType="Linux26_64"
GuestAdditionsRunLevel=2
GuestAdditionsVersion="5.2.42 r137960"
GuestAdditionsFacility_VirtualBox Base Driver=50,1602550191845
GuestAdditionsFacility_VirtualBox System Service=50,1602550192573
GuestAdditionsFacility_Seamless Mode=0,1602550193045
GuestAdditionsFacility_Graphics Mode=0,1602550191835
}
I1013 00:51:57.427910 1261 main.go:118] libmachine: STDERR:
{
}
I1013 00:51:57.428002 1261 status.go:294] multinode-20201013004012-797 host status = "Running" (err=<nil>)
I1013 00:51:57.428016 1261 host.go:65] Checking if "multinode-20201013004012-797" exists ...
I1013 00:51:57.428354 1261 main.go:118] libmachine: COMMAND: /usr/local/bin/VBoxManage showvminfo multinode-20201013004012-797 --machinereadable
I1013 00:51:57.572726 1261 main.go:118] libmachine: STDOUT:
{
name="multinode-20201013004012-797"
groups="/"
ostype="Linux 2.6 / 3.x / 4.x (64-bit)"
UUID="46253d92-a114-48fc-a49f-85441870e39a"
CfgFile="/Users/runner/work/minikube/minikube/minikube_binaries/testhome/.minikube/machines/multinode-20201013004012-797/multinode-20201013004012-797/multinode-20201013004012-797.vbox"
SnapFldr="/Users/runner/work/minikube/minikube/minikube_binaries/testhome/.minikube/machines/multinode-20201013004012-797/multinode-20201013004012-797/Snapshots"
LogFldr="/Users/runner/work/minikube/minikube/minikube_binaries/testhome/.minikube/machines/multinode-20201013004012-797/multinode-20201013004012-797/Logs"
hardwareuuid="46253d92-a114-48fc-a49f-85441870e39a"
memory=2200
pagefusion="off"
vram=8
cpuexecutioncap=100
hpet="on"
cpu-profile="host"
chipset="piix3"
firmware="BIOS"
cpus=2
pae="on"
longmode="on"
triplefaultreset="off"
apic="on"
x2apic="off"
nested-hw-virt="off"
cpuid-portability-level=0
bootmenu="disabled"
boot1="dvd"
boot2="dvd"
boot3="disk"
boot4="none"
acpi="on"
ioapic="on"
biosapic="apic"
biossystemtimeoffset=0
rtcuseutc="on"
hwvirtex="on"
nestedpaging="on"
largepages="on"
vtxvpid="on"
vtxux="on"
paravirtprovider="default"
effparavirtprovider="kvm"
VMState="running"
VMStateChangeTime="2020-10-13T00:49:23.912000000"
graphicscontroller="vboxvga"
monitorcount=1
accelerate3d="off"
accelerate2dvideo="off"
teleporterenabled="off"
teleporterport=0
teleporteraddress=""
teleporterpassword=""
tracing-enabled="off"
tracing-allow-vm-access="off"
tracing-config=""
autostart-enabled="off"
autostart-delay=0
defaultfrontend=""
vmprocpriority="default"
storagecontrollername0="SATA"
storagecontrollertype0="IntelAhci"
storagecontrollerinstance0="0"
storagecontrollermaxportcount0="30"
storagecontrollerportcount0="30"
storagecontrollerbootable0="on"
"SATA-0-0"="/Users/runner/work/minikube/minikube/minikube_binaries/testhome/.minikube/machines/multinode-20201013004012-797/boot2docker.iso"
"SATA-ImageUUID-0-0"="e6ad9e73-a847-4eff-b879-188ecd37b70a"
"SATA-tempeject"="off"
"SATA-IsEjected"="off"
"SATA-1-0"="/Users/runner/work/minikube/minikube/minikube_binaries/testhome/.minikube/machines/multinode-20201013004012-797/disk.vmdk"
"SATA-ImageUUID-1-0"="d117e280-37e6-43e4-9bd5-bdf6016e47b8"
"SATA-2-0"="none"
"SATA-3-0"="none"
"SATA-4-0"="none"
"SATA-5-0"="none"
"SATA-6-0"="none"
"SATA-7-0"="none"
"SATA-8-0"="none"
"SATA-9-0"="none"
"SATA-10-0"="none"
"SATA-11-0"="none"
"SATA-12-0"="none"
"SATA-13-0"="none"
"SATA-14-0"="none"
"SATA-15-0"="none"
"SATA-16-0"="none"
"SATA-17-0"="none"
"SATA-18-0"="none"
"SATA-19-0"="none"
"SATA-20-0"="none"
"SATA-21-0"="none"
"SATA-22-0"="none"
"SATA-23-0"="none"
"SATA-24-0"="none"
"SATA-25-0"="none"
"SATA-26-0"="none"
"SATA-27-0"="none"
"SATA-28-0"="none"
"SATA-29-0"="none"
natnet1="nat"
macaddress1="08002739BDE5"
cableconnected1="on"
nic1="nat"
nictype1="virtio"
nicspeed1="0"
mtu="0"
sockSnd="64"
sockRcv="64"
tcpWndSnd="64"
tcpWndRcv="64"
Forwarding(0)="ssh,tcp,127.0.0.1,49343,,22"
hostonlyadapter2="vboxnet0"
macaddress2="0800272F69DD"
cableconnected2="on"
nic2="hostonly"
nictype2="virtio"
nicspeed2="0"
nic3="none"
nic4="none"
nic5="none"
nic6="none"
nic7="none"
nic8="none"
hidpointing="ps2mouse"
hidkeyboard="ps2kbd"
uart1="off"
uart2="off"
uart3="off"
uart4="off"
lpt1="off"
lpt2="off"
audio="coreaudio"
audio_out="off"
audio_in="off"
clipboard="disabled"
draganddrop="disabled"
SessionName="headless"
VideoMode="720,400,0"@0,0 1
vrde="off"
usb="off"
ehci="off"
xhci="off"
SharedFolderNameMachineMapping1="Users"
SharedFolderPathMachineMapping1="/Users"
VRDEActiveConnection="off"
VRDEClients==0
videocap="off"
videocapaudio="off"
capturescreens=""
capturefilename="/Users/runner/work/minikube/minikube/minikube_binaries/testhome/.minikube/machines/multinode-20201013004012-797/multinode-20201013004012-797/multinode-20201013004012-797.webm"
captureres="1024x768"
capturevideorate=512
capturevideofps=25
captureopts=""
GuestMemoryBalloon=0
GuestOSType="Linux26_64"
GuestAdditionsRunLevel=2
GuestAdditionsVersion="5.2.42 r137960"
GuestAdditionsFacility_VirtualBox Base Driver=50,1602550191845
GuestAdditionsFacility_VirtualBox System Service=50,1602550192573
GuestAdditionsFacility_Seamless Mode=0,1602550193045
GuestAdditionsFacility_Graphics Mode=0,1602550191835
}
I1013 00:51:57.572777 1261 main.go:118] libmachine: STDERR:
{
}
I1013 00:51:57.572916 1261 main.go:118] libmachine: COMMAND: /usr/local/bin/VBoxManage showvminfo multinode-20201013004012-797 --machinereadable
I1013 00:51:57.718714 1261 main.go:118] libmachine: STDOUT:
{
name="multinode-20201013004012-797"
groups="/"
ostype="Linux 2.6 / 3.x / 4.x (64-bit)"
UUID="46253d92-a114-48fc-a49f-85441870e39a"
CfgFile="/Users/runner/work/minikube/minikube/minikube_binaries/testhome/.minikube/machines/multinode-20201013004012-797/multinode-20201013004012-797/multinode-20201013004012-797.vbox"
SnapFldr="/Users/runner/work/minikube/minikube/minikube_binaries/testhome/.minikube/machines/multinode-20201013004012-797/multinode-20201013004012-797/Snapshots"
LogFldr="/Users/runner/work/minikube/minikube/minikube_binaries/testhome/.minikube/machines/multinode-20201013004012-797/multinode-20201013004012-797/Logs"
hardwareuuid="46253d92-a114-48fc-a49f-85441870e39a"
memory=2200
pagefusion="off"
vram=8
cpuexecutioncap=100
hpet="on"
cpu-profile="host"
chipset="piix3"
firmware="BIOS"
cpus=2
pae="on"
longmode="on"
triplefaultreset="off"
apic="on"
x2apic="off"
nested-hw-virt="off"
cpuid-portability-level=0
bootmenu="disabled"
boot1="dvd"
boot2="dvd"
boot3="disk"
boot4="none"
acpi="on"
ioapic="on"
biosapic="apic"
biossystemtimeoffset=0
rtcuseutc="on"
hwvirtex="on"
nestedpaging="on"
largepages="on"
vtxvpid="on"
vtxux="on"
paravirtprovider="default"
effparavirtprovider="kvm"
VMState="running"
VMStateChangeTime="2020-10-13T00:49:23.912000000"
graphicscontroller="vboxvga"
monitorcount=1
accelerate3d="off"
accelerate2dvideo="off"
teleporterenabled="off"
teleporterport=0
teleporteraddress=""
teleporterpassword=""
tracing-enabled="off"
tracing-allow-vm-access="off"
tracing-config=""
autostart-enabled="off"
autostart-delay=0
defaultfrontend=""
vmprocpriority="default"
storagecontrollername0="SATA"
storagecontrollertype0="IntelAhci"
storagecontrollerinstance0="0"
storagecontrollermaxportcount0="30"
storagecontrollerportcount0="30"
storagecontrollerbootable0="on"
"SATA-0-0"="/Users/runner/work/minikube/minikube/minikube_binaries/testhome/.minikube/machines/multinode-20201013004012-797/boot2docker.iso"
"SATA-ImageUUID-0-0"="e6ad9e73-a847-4eff-b879-188ecd37b70a"
"SATA-tempeject"="off"
"SATA-IsEjected"="off"
"SATA-1-0"="/Users/runner/work/minikube/minikube/minikube_binaries/testhome/.minikube/machines/multinode-20201013004012-797/disk.vmdk"
"SATA-ImageUUID-1-0"="d117e280-37e6-43e4-9bd5-bdf6016e47b8"
"SATA-2-0"="none"
"SATA-3-0"="none"
"SATA-4-0"="none"
"SATA-5-0"="none"
"SATA-6-0"="none"
"SATA-7-0"="none"
"SATA-8-0"="none"
"SATA-9-0"="none"
"SATA-10-0"="none"
"SATA-11-0"="none"
"SATA-12-0"="none"
"SATA-13-0"="none"
"SATA-14-0"="none"
"SATA-15-0"="none"
"SATA-16-0"="none"
"SATA-17-0"="none"
"SATA-18-0"="none"
"SATA-19-0"="none"
"SATA-20-0"="none"
"SATA-21-0"="none"
"SATA-22-0"="none"
"SATA-23-0"="none"
"SATA-24-0"="none"
"SATA-25-0"="none"
"SATA-26-0"="none"
"SATA-27-0"="none"
"SATA-28-0"="none"
"SATA-29-0"="none"
natnet1="nat"
macaddress1="08002739BDE5"
cableconnected1="on"
nic1="nat"
nictype1="virtio"
nicspeed1="0"
mtu="0"
sockSnd="64"
sockRcv="64"
tcpWndSnd="64"
tcpWndRcv="64"
Forwarding(0)="ssh,tcp,127.0.0.1,49343,,22"
hostonlyadapter2="vboxnet0"
macaddress2="0800272F69DD"
cableconnected2="on"
nic2="hostonly"
nictype2="virtio"
nicspeed2="0"
nic3="none"
nic4="none"
nic5="none"
nic6="none"
nic7="none"
nic8="none"
hidpointing="ps2mouse"
hidkeyboard="ps2kbd"
uart1="off"
uart2="off"
uart3="off"
uart4="off"
lpt1="off"
lpt2="off"
audio="coreaudio"
audio_out="off"
audio_in="off"
clipboard="disabled"
draganddrop="disabled"
SessionName="headless"
VideoMode="720,400,0"@0,0 1
vrde="off"
usb="off"
ehci="off"
xhci="off"
SharedFolderNameMachineMapping1="Users"
SharedFolderPathMachineMapping1="/Users"
VRDEActiveConnection="off"
VRDEClients==0
videocap="off"
videocapaudio="off"
capturescreens=""
capturefilename="/Users/runner/work/minikube/minikube/minikube_binaries/testhome/.minikube/machines/multinode-20201013004012-797/multinode-20201013004012-797/multinode-20201013004012-797.webm"
captureres="1024x768"
capturevideorate=512
capturevideofps=25
captureopts=""
GuestMemoryBalloon=0
GuestOSType="Linux26_64"
GuestAdditionsRunLevel=2
GuestAdditionsVersion="5.2.42 r137960"
GuestAdditionsFacility_VirtualBox Base Driver=50,1602550191845
GuestAdditionsFacility_VirtualBox System Service=50,1602550192573
GuestAdditionsFacility_Seamless Mode=0,1602550193045
GuestAdditionsFacility_Graphics Mode=0,1602550191835
}
I1013 00:51:57.718762 1261 main.go:118] libmachine: STDERR:
{
}
I1013 00:51:57.718994 1261 main.go:118] libmachine: Host-only MAC: 0800272f69dd

I1013 00:51:57.720222    1261 main.go:118] libmachine: Using SSH client type: native
I1013 00:51:57.720507    1261 main.go:118] libmachine: &{{{<nil> 0 [] [] []} docker [0x13b1d70] 0x13b1d40 <nil>  [] 0s} 127.0.0.1 49343 <nil> <nil>}
I1013 00:51:57.720528    1261 main.go:118] libmachine: About to run SSH command:
ip addr show
I1013 00:51:57.819125    1261 main.go:118] libmachine: SSH cmd err, output: <nil>: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:39:bd:e5 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 86275sec preferred_lft 86275sec
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:2f:69:dd brd ff:ff:ff:ff:ff:ff
    inet 192.168.99.100/24 brd 192.168.99.255 scope global dynamic eth1
       valid_lft 475sec preferred_lft 475sec
4: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/sit 0.0.0.0 brd 0.0.0.0
5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:a5:dc:be:f3 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
7: veth6797ad3@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether ce:08:d2:c0:7d:62 brd ff:ff:ff:ff:ff:ff link-netnsid 0

I1013 00:51:57.819198    1261 main.go:118] libmachine: SSH returned: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:39:bd:e5 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 86275sec preferred_lft 86275sec
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:2f:69:dd brd ff:ff:ff:ff:ff:ff
    inet 192.168.99.100/24 brd 192.168.99.255 scope global dynamic eth1
       valid_lft 475sec preferred_lft 475sec
4: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/sit 0.0.0.0 brd 0.0.0.0
5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:a5:dc:be:f3 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
7: veth6797ad3@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether ce:08:d2:c0:7d:62 brd ff:ff:ff:ff:ff:ff link-netnsid 0

END SSH

I1013 00:51:57.819225    1261 host.go:65] Checking if "multinode-20201013004012-797" exists ...
I1013 00:51:57.819752    1261 ssh_runner.go:148] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1013 00:51:57.819810    1261 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:49343 SSHKeyPath:/Users/runner/work/minikube/minikube/minikube_binaries/testhome/.minikube/machines/multinode-20201013004012-797/id_rsa Username:docker}
I1013 00:51:57.870664    1261 system_pods.go:161] Checking kubelet status ...
I1013 00:51:57.871174    1261 ssh_runner.go:148] Run: systemctl --version
I1013 00:51:57.884057    1261 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service kubelet
I1013 00:51:57.899357    1261 status.go:349] multinode-20201013004012-797 kubelet status = Running
I1013 00:51:57.900490    1261 kubeconfig.go:93] found "multinode-20201013004012-797" server: "https://192.168.99.100:8443"
I1013 00:51:57.900523    1261 api_server.go:146] Checking apiserver status ...
I1013 00:51:57.900742    1261 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1013 00:51:57.914799    1261 ssh_runner.go:148] Run: sudo egrep ^[0-9]+:freezer: /proc/3733/cgroup
I1013 00:51:57.927633    1261 api_server.go:162] apiserver freezer: "6:freezer:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod150697ed87843677726b2403a107ae35.slice/docker-206d406f4503dbfe169e581b3d5a81a01f37859f8287762396332c9398925667.scope"
I1013 00:51:57.927909    1261 ssh_runner.go:148] Run: sudo cat /sys/fs/cgroup/freezer/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod150697ed87843677726b2403a107ae35.slice/docker-206d406f4503dbfe169e581b3d5a81a01f37859f8287762396332c9398925667.scope/freezer.state
I1013 00:51:57.939139    1261 api_server.go:184] freezer state: "THAWED"
I1013 00:51:57.939193    1261 api_server.go:221] Checking apiserver healthz at https://192.168.99.100:8443/healthz ...
I1013 00:51:57.960295    1261 api_server.go:241] https://192.168.99.100:8443/healthz returned 200:
ok
I1013 00:51:57.960320    1261 status.go:370] multinode-20201013004012-797 apiserver status = Running (err=<nil>)
I1013 00:51:57.960329    1261 status.go:224] multinode-20201013004012-797 status: &{Name:multinode-20201013004012-797 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false}
I1013 00:51:57.960352    1261 status.go:222] checking status of multinode-20201013004012-797-m02 ...
I1013 00:51:57.960693    1261 main.go:118] libmachine: COMMAND: /usr/local/bin/VBoxManage showvminfo multinode-20201013004012-797-m02 --machinereadable
I1013 00:51:58.121810    1261 main.go:118] libmachine: STDOUT:
{
name="multinode-20201013004012-797-m02"
groups="/"
ostype="Linux 2.6 / 3.x / 4.x (64-bit)"
UUID="8e20a0be-c791-495c-80b0-b4c26bf22f65"
CfgFile="/Users/runner/work/minikube/minikube/minikube_binaries/testhome/.minikube/machines/multinode-20201013004012-797-m02/multinode-20201013004012-797-m02/multinode-20201013004012-797-m02.vbox"
SnapFldr="/Users/runner/work/minikube/minikube/minikube_binaries/testhome/.minikube/machines/multinode-20201013004012-797-m02/multinode-20201013004012-797-m02/Snapshots"
LogFldr="/Users/runner/work/minikube/minikube/minikube_binaries/testhome/.minikube/machines/multinode-20201013004012-797-m02/multinode-20201013004012-797-m02/Logs"
hardwareuuid="8e20a0be-c791-495c-80b0-b4c26bf22f65"
memory=2200
pagefusion="off"
vram=8
cpuexecutioncap=100
hpet="on"
cpu-profile="host"
chipset="piix3"
firmware="BIOS"
cpus=2
pae="on"
longmode="on"
triplefaultreset="off"
apic="on"
x2apic="off"
nested-hw-virt="off"
cpuid-portability-level=0
bootmenu="disabled"
boot1="dvd"
boot2="dvd"
boot3="disk"
boot4="none"
acpi="on"
ioapic="on"
biosapic="apic"
biossystemtimeoffset=0
rtcuseutc="on"
hwvirtex="on"
nestedpaging="on"
largepages="on"
vtxvpid="on"
vtxux="on"
paravirtprovider="default"
effparavirtprovider="kvm"
VMState="running"
VMStateChangeTime="2020-10-13T00:50:29.613000000"
graphicscontroller="vboxvga"
monitorcount=1
accelerate3d="off"
accelerate2dvideo="off"
teleporterenabled="off"
teleporterport=0
teleporteraddress=""
teleporterpassword=""
tracing-enabled="off"
tracing-allow-vm-access="off"
tracing-config=""
autostart-enabled="off"
autostart-delay=0
defaultfrontend=""
vmprocpriority="default"
storagecontrollername0="SATA"
storagecontrollertype0="IntelAhci"
storagecontrollerinstance0="0"
storagecontrollermaxportcount0="30"
storagecontrollerportcount0="30"
storagecontrollerbootable0="on"
"SATA-0-0"="/Users/runner/work/minikube/minikube/minikube_binaries/testhome/.minikube/machines/multinode-20201013004012-797-m02/boot2docker.iso"
"SATA-ImageUUID-0-0"="624b1971-8989-4287-812d-322f2e626487"
"SATA-tempeject"="off"
"SATA-IsEjected"="off"
"SATA-1-0"="/Users/runner/work/minikube/minikube/minikube_binaries/testhome/.minikube/machines/multinode-20201013004012-797-m02/disk.vmdk"
"SATA-ImageUUID-1-0"="2564e044-2aa3-41f6-a58f-c71f111e2a8d"
"SATA-2-0"="none"
"SATA-3-0"="none"
"SATA-4-0"="none"
"SATA-5-0"="none"
"SATA-6-0"="none"
"SATA-7-0"="none"
"SATA-8-0"="none"
"SATA-9-0"="none"
"SATA-10-0"="none"
"SATA-11-0"="none"
"SATA-12-0"="none"
"SATA-13-0"="none"
"SATA-14-0"="none"
"SATA-15-0"="none"
"SATA-16-0"="none"
"SATA-17-0"="none"
"SATA-18-0"="none"
"SATA-19-0"="none"
"SATA-20-0"="none"
"SATA-21-0"="none"
"SATA-22-0"="none"
"SATA-23-0"="none"
"SATA-24-0"="none"
"SATA-25-0"="none"
"SATA-26-0"="none"
"SATA-27-0"="none"
"SATA-28-0"="none"
"SATA-29-0"="none"
natnet1="nat"
macaddress1="080027D09819"
cableconnected1="on"
nic1="nat"
nictype1="virtio"
nicspeed1="0"
mtu="0"
sockSnd="64"
sockRcv="64"
tcpWndSnd="64"
tcpWndRcv="64"
Forwarding(0)="ssh,tcp,127.0.0.1,49402,,22"
hostonlyadapter2="vboxnet0"
macaddress2="080027FF7E05"
cableconnected2="on"
nic2="hostonly"
nictype2="virtio"
nicspeed2="0"
nic3="none"
nic4="none"
nic5="none"
nic6="none"
nic7="none"
nic8="none"
hidpointing="ps2mouse"
hidkeyboard="ps2kbd"
uart1="off"
uart2="off"
uart3="off"
uart4="off"
lpt1="off"
lpt2="off"
audio="coreaudio"
audio_out="off"
audio_in="off"
clipboard="disabled"
draganddrop="disabled"
SessionName="headless"
VideoMode="720,400,0"@0,0 1
vrde="off"
usb="off"
ehci="off"
xhci="off"
SharedFolderNameMachineMapping1="Users"
SharedFolderPathMachineMapping1="/Users"
VRDEActiveConnection="off"
VRDEClients==0
videocap="off"
videocapaudio="off"
capturescreens=""
capturefilename="/Users/runner/work/minikube/minikube/minikube_binaries/testhome/.minikube/machines/multinode-20201013004012-797-m02/multinode-20201013004012-797-m02/multinode-20201013004012-797-m02.webm"
captureres="1024x768"
capturevideorate=512
capturevideofps=25
captureopts=""
GuestMemoryBalloon=0
GuestOSType="Linux26_64"
GuestAdditionsRunLevel=2
GuestAdditionsVersion="5.2.42 r137960"
GuestAdditionsFacility_VirtualBox Base Driver=50,1602550258542
GuestAdditionsFacility_VirtualBox System Service=50,1602550259361
GuestAdditionsFacility_Seamless Mode=0,1602550259995
GuestAdditionsFacility_Graphics Mode=0,1602550258541
}
I1013 00:51:58.121870    1261 main.go:118] libmachine: STDERR:
{
}
I1013 00:51:58.121966    1261 status.go:294] multinode-20201013004012-797-m02 host status = "Running" (err=<nil>)
I1013 00:51:58.121979    1261 host.go:65] Checking if "multinode-20201013004012-797-m02" exists ...
I1013 00:51:58.122343    1261 main.go:118] libmachine: COMMAND: /usr/local/bin/VBoxManage showvminfo multinode-20201013004012-797-m02 --machinereadable
I1013 00:51:58.259555    1261 main.go:118] libmachine: STDOUT:
{
name="multinode-20201013004012-797-m02"
groups="/"
ostype="Linux 2.6 / 3.x / 4.x (64-bit)"
UUID="8e20a0be-c791-495c-80b0-b4c26bf22f65"
CfgFile="/Users/runner/work/minikube/minikube/minikube_binaries/testhome/.minikube/machines/multinode-20201013004012-797-m02/multinode-20201013004012-797-m02/multinode-20201013004012-797-m02.vbox"
SnapFldr="/Users/runner/work/minikube/minikube/minikube_binaries/testhome/.minikube/machines/multinode-20201013004012-797-m02/multinode-20201013004012-797-m02/Snapshots"
LogFldr="/Users/runner/work/minikube/minikube/minikube_binaries/testhome/.minikube/machines/multinode-20201013004012-797-m02/multinode-20201013004012-797-m02/Logs"
hardwareuuid="8e20a0be-c791-495c-80b0-b4c26bf22f65"
memory=2200
pagefusion="off"
vram=8
cpuexecutioncap=100
hpet="on"
cpu-profile="host"
chipset="piix3"
firmware="BIOS"
cpus=2
pae="on"
longmode="on"
triplefaultreset="off"
apic="on"
x2apic="off"
nested-hw-virt="off"
cpuid-portability-level=0
bootmenu="disabled"
boot1="dvd"
boot2="dvd"
boot3="disk"
boot4="none"
acpi="on"
ioapic="on"
biosapic="apic"
biossystemtimeoffset=0
rtcuseutc="on"
hwvirtex="on"
nestedpaging="on"
largepages="on"
vtxvpid="on"
vtxux="on"
paravirtprovider="default"
effparavirtprovider="kvm"
VMState="running"
VMStateChangeTime="2020-10-13T00:50:29.613000000"
graphicscontroller="vboxvga"
monitorcount=1
accelerate3d="off"
accelerate2dvideo="off"
teleporterenabled="off"
teleporterport=0
teleporteraddress=""
teleporterpassword=""
tracing-enabled="off"
tracing-allow-vm-access="off"
tracing-config=""
autostart-enabled="off"
autostart-delay=0
defaultfrontend=""
vmprocpriority="default"
storagecontrollername0="SATA"
storagecontrollertype0="IntelAhci"
storagecontrollerinstance0="0"
storagecontrollermaxportcount0="30"
storagecontrollerportcount0="30"
storagecontrollerbootable0="on"
"SATA-0-0"="/Users/runner/work/minikube/minikube/minikube_binaries/testhome/.minikube/machines/multinode-20201013004012-797-m02/boot2docker.iso"
"SATA-ImageUUID-0-0"="624b1971-8989-4287-812d-322f2e626487"
"SATA-tempeject"="off"
"SATA-IsEjected"="off"
"SATA-1-0"="/Users/runner/work/minikube/minikube/minikube_binaries/testhome/.minikube/machines/multinode-20201013004012-797-m02/disk.vmdk"
"SATA-ImageUUID-1-0"="2564e044-2aa3-41f6-a58f-c71f111e2a8d"
"SATA-2-0"="none"
"SATA-3-0"="none"
"SATA-4-0"="none"
"SATA-5-0"="none"
"SATA-6-0"="none"
"SATA-7-0"="none"
"SATA-8-0"="none"
"SATA-9-0"="none"
"SATA-10-0"="none"
"SATA-11-0"="none"
"SATA-12-0"="none"
"SATA-13-0"="none"
"SATA-14-0"="none"
"SATA-15-0"="none"
"SATA-16-0"="none"
"SATA-17-0"="none"
"SATA-18-0"="none"
"SATA-19-0"="none"
"SATA-20-0"="none"
"SATA-21-0"="none"
"SATA-22-0"="none"
"SATA-23-0"="none"
"SATA-24-0"="none"
"SATA-25-0"="none"
"SATA-26-0"="none"
"SATA-27-0"="none"
"SATA-28-0"="none"
"SATA-29-0"="none"
natnet1="nat"
macaddress1="080027D09819"
cableconnected1="on"
nic1="nat"
nictype1="virtio"
nicspeed1="0"
mtu="0"
sockSnd="64"
sockRcv="64"
tcpWndSnd="64"
tcpWndRcv="64"
Forwarding(0)="ssh,tcp,127.0.0.1,49402,,22"
hostonlyadapter2="vboxnet0"
macaddress2="080027FF7E05"
cableconnected2="on"
nic2="hostonly"
nictype2="virtio"
nicspeed2="0"
nic3="none"
nic4="none"
nic5="none"
nic6="none"
nic7="none"
nic8="none"
hidpointing="ps2mouse"
hidkeyboard="ps2kbd"
uart1="off"
uart2="off"
uart3="off"
uart4="off"
lpt1="off"
lpt2="off"
audio="coreaudio"
audio_out="off"
audio_in="off"
clipboard="disabled"
draganddrop="disabled"
SessionName="headless"
VideoMode="720,400,0"@0,0 1
vrde="off"
usb="off"
ehci="off"
xhci="off"
SharedFolderNameMachineMapping1="Users"
SharedFolderPathMachineMapping1="/Users"
VRDEActiveConnection="off"
VRDEClients==0
videocap="off"
videocapaudio="off"
capturescreens=""
capturefilename="/Users/runner/work/minikube/minikube/minikube_binaries/testhome/.minikube/machines/multinode-20201013004012-797-m02/multinode-20201013004012-797-m02/multinode-20201013004012-797-m02.webm"
captureres="1024x768"
capturevideorate=512
capturevideofps=25
captureopts=""
GuestMemoryBalloon=0
GuestOSType="Linux26_64"
GuestAdditionsRunLevel=2
GuestAdditionsVersion="5.2.42 r137960"
GuestAdditionsFacility_VirtualBox Base Driver=50,1602550258542
GuestAdditionsFacility_VirtualBox System Service=50,1602550259361
GuestAdditionsFacility_Seamless Mode=0,1602550259995
GuestAdditionsFacility_Graphics Mode=0,1602550258541
}
I1013 00:51:58.259604    1261 main.go:118] libmachine: STDERR:
{
}
I1013 00:51:58.259712    1261 main.go:118] libmachine: COMMAND: /usr/local/bin/VBoxManage showvminfo multinode-20201013004012-797-m02 --machinereadable
I1013 00:51:58.442626    1261 main.go:118] libmachine: STDOUT:
{
name="multinode-20201013004012-797-m02"
groups="/"
ostype="Linux 2.6 / 3.x / 4.x (64-bit)"
UUID="8e20a0be-c791-495c-80b0-b4c26bf22f65"
CfgFile="/Users/runner/work/minikube/minikube/minikube_binaries/testhome/.minikube/machines/multinode-20201013004012-797-m02/multinode-20201013004012-797-m02/multinode-20201013004012-797-m02.vbox"
SnapFldr="/Users/runner/work/minikube/minikube/minikube_binaries/testhome/.minikube/machines/multinode-20201013004012-797-m02/multinode-20201013004012-797-m02/Snapshots"
LogFldr="/Users/runner/work/minikube/minikube/minikube_binaries/testhome/.minikube/machines/multinode-20201013004012-797-m02/multinode-20201013004012-797-m02/Logs"
hardwareuuid="8e20a0be-c791-495c-80b0-b4c26bf22f65"
memory=2200
pagefusion="off"
vram=8
cpuexecutioncap=100
hpet="on"
cpu-profile="host"
chipset="piix3"
firmware="BIOS"
cpus=2
pae="on"
longmode="on"
triplefaultreset="off"
apic="on"
x2apic="off"
nested-hw-virt="off"
cpuid-portability-level=0
bootmenu="disabled"
boot1="dvd"
boot2="dvd"
boot3="disk"
boot4="none"
acpi="on"
ioapic="on"
biosapic="apic"
biossystemtimeoffset=0
rtcuseutc="on"
hwvirtex="on"
nestedpaging="on"
largepages="on"
vtxvpid="on"
vtxux="on"
paravirtprovider="default"
effparavirtprovider="kvm"
VMState="running"
VMStateChangeTime="2020-10-13T00:50:29.613000000"
graphicscontroller="vboxvga"
monitorcount=1
accelerate3d="off"
accelerate2dvideo="off"
teleporterenabled="off"
teleporterport=0
teleporteraddress=""
teleporterpassword=""
tracing-enabled="off"
tracing-allow-vm-access="off"
tracing-config=""
autostart-enabled="off"
autostart-delay=0
defaultfrontend=""
vmprocpriority="default"
storagecontrollername0="SATA"
storagecontrollertype0="IntelAhci"
storagecontrollerinstance0="0"
storagecontrollermaxportcount0="30"
storagecontrollerportcount0="30"
storagecontrollerbootable0="on"
"SATA-0-0"="/Users/runner/work/minikube/minikube/minikube_binaries/testhome/.minikube/machines/multinode-20201013004012-797-m02/boot2docker.iso"
"SATA-ImageUUID-0-0"="624b1971-8989-4287-812d-322f2e626487"
"SATA-tempeject"="off"
"SATA-IsEjected"="off"
"SATA-1-0"="/Users/runner/work/minikube/minikube/minikube_binaries/testhome/.minikube/machines/multinode-20201013004012-797-m02/disk.vmdk"
"SATA-ImageUUID-1-0"="2564e044-2aa3-41f6-a58f-c71f111e2a8d"
"SATA-2-0"="none"
"SATA-3-0"="none"
"SATA-4-0"="none"
"SATA-5-0"="none"
"SATA-6-0"="none"
"SATA-7-0"="none"
"SATA-8-0"="none"
"SATA-9-0"="none"
"SATA-10-0"="none"
"SATA-11-0"="none"
"SATA-12-0"="none"
"SATA-13-0"="none"
"SATA-14-0"="none"
"SATA-15-0"="none"
"SATA-16-0"="none"
"SATA-17-0"="none"
"SATA-18-0"="none"
"SATA-19-0"="none"
"SATA-20-0"="none"
"SATA-21-0"="none"
"SATA-22-0"="none"
"SATA-23-0"="none"
"SATA-24-0"="none"
"SATA-25-0"="none"
"SATA-26-0"="none"
"SATA-27-0"="none"
"SATA-28-0"="none"
"SATA-29-0"="none"
natnet1="nat"
macaddress1="080027D09819"
cableconnected1="on"
nic1="nat"
nictype1="virtio"
nicspeed1="0"
mtu="0"
sockSnd="64"
sockRcv="64"
tcpWndSnd="64"
tcpWndRcv="64"
Forwarding(0)="ssh,tcp,127.0.0.1,49402,,22"
hostonlyadapter2="vboxnet0"
macaddress2="080027FF7E05"
cableconnected2="on"
nic2="hostonly"
nictype2="virtio"
nicspeed2="0"
nic3="none"
nic4="none"
nic5="none"
nic6="none"
nic7="none"
nic8="none"
hidpointing="ps2mouse"
hidkeyboard="ps2kbd"
uart1="off"
uart2="off"
uart3="off"
uart4="off"
lpt1="off"
lpt2="off"
audio="coreaudio"
audio_out="off"
audio_in="off"
clipboard="disabled"
draganddrop="disabled"
SessionName="headless"
VideoMode="720,400,0"@0,0 1
vrde="off"
usb="off"
ehci="off"
xhci="off"
SharedFolderNameMachineMapping1="Users"
SharedFolderPathMachineMapping1="/Users"
VRDEActiveConnection="off"
VRDEClients==0
videocap="off"
videocapaudio="off"
capturescreens=""
capturefilename="/Users/runner/work/minikube/minikube/minikube_binaries/testhome/.minikube/machines/multinode-20201013004012-797-m02/multinode-20201013004012-797-m02/multinode-20201013004012-797-m02.webm"
captureres="1024x768"
capturevideorate=512
capturevideofps=25
captureopts=""
GuestMemoryBalloon=0
GuestOSType="Linux26_64"
GuestAdditionsRunLevel=2
GuestAdditionsVersion="5.2.42 r137960"
GuestAdditionsFacility_VirtualBox Base Driver=50,1602550258542
GuestAdditionsFacility_VirtualBox System Service=50,1602550259361
GuestAdditionsFacility_Seamless Mode=0,1602550259995
GuestAdditionsFacility_Graphics Mode=0,1602550258541
}
I1013 00:51:58.442678    1261 main.go:118] libmachine: STDERR:
{
}
I1013 00:51:58.442916    1261 main.go:118] libmachine: Host-only MAC: 080027ff7e05

I1013 00:51:58.443214    1261 main.go:118] libmachine: Using SSH client type: native
I1013 00:51:58.443514    1261 main.go:118] libmachine: &{{{<nil> 0 [] [] []} docker [0x13b1d70] 0x13b1d40 <nil>  [] 0s} 127.0.0.1 49402 <nil> <nil>}
I1013 00:51:58.443534    1261 main.go:118] libmachine: About to run SSH command:
ip addr show
I1013 00:51:58.547485    1261 main.go:118] libmachine: SSH cmd err, output: <nil>: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:d0:98:19 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 86341sec preferred_lft 86341sec
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:ff:7e:05 brd ff:ff:ff:ff:ff:ff
    inet 192.168.99.101/24 brd 192.168.99.255 scope global dynamic eth1
       valid_lft 541sec preferred_lft 541sec
4: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/sit 0.0.0.0 brd 0.0.0.0
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:4f:e0:11:3f brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever

I1013 00:51:58.547546    1261 main.go:118] libmachine: SSH returned: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:d0:98:19 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 86341sec preferred_lft 86341sec
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:ff:7e:05 brd ff:ff:ff:ff:ff:ff
    inet 192.168.99.101/24 brd 192.168.99.255 scope global dynamic eth1
       valid_lft 541sec preferred_lft 541sec
4: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/sit 0.0.0.0 brd 0.0.0.0
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:4f:e0:11:3f brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever

END SSH

I1013 00:51:58.547570    1261 host.go:65] Checking if "multinode-20201013004012-797-m02" exists ...
I1013 00:51:58.548093    1261 ssh_runner.go:148] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1013 00:51:58.548119    1261 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:49402 SSHKeyPath:/Users/runner/work/minikube/minikube/minikube_binaries/testhome/.minikube/machines/multinode-20201013004012-797-m02/id_rsa Username:docker}
I1013 00:51:58.608774    1261 system_pods.go:161] Checking kubelet status ...
I1013 00:51:58.609055    1261 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service kubelet
I1013 00:51:58.627254    1261 status.go:349] multinode-20201013004012-797-m02 kubelet status = Stopped
I1013 00:51:58.627274    1261 status.go:224] multinode-20201013004012-797-m02 status: &{Name:multinode-20201013004012-797-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true}

** /stderr **
multinode_test.go:236: failed to run minikube status. args "./minikube-darwin-amd64 -p multinode-20201013004012-797 status --alsologtostderr" : exit status 2
helpers_test.go:216: -----------------------post-mortem--------------------------------
helpers_test.go:233: (dbg) Run: ./minikube-darwin-amd64 status --format={{.Host}} -p multinode-20201013004012-797 -n multinode-20201013004012-797
helpers_test.go:238: <<< TestMultiNode/serial/RestartMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:239: ======> post-mortem[TestMultiNode/serial/RestartMultiNode]: minikube logs <======
helpers_test.go:241: (dbg) Run: ./minikube-darwin-amd64 -p multinode-20201013004012-797 logs -n 25
helpers_test.go:241: (dbg) Done: ./minikube-darwin-amd64 -p multinode-20201013004012-797 logs -n 25: (2.926602818s)
helpers_test.go:246: TestMultiNode/serial/RestartMultiNode logs:
-- stdout --
* ==> Docker <==
* -- Logs begin at Tue 2020-10-13 00:49:52 UTC, end at Tue 2020-10-13 00:52:00 UTC. --
* Oct 13 00:49:58 multinode-20201013004012-797 dockerd[2442]: time="2020-10-13T00:49:58.946589253Z" level=info msg="Loading containers: done."
* Oct 13 00:49:59 multinode-20201013004012-797 dockerd[2442]: time="2020-10-13T00:49:59.000300839Z" level=info msg="Docker daemon" commit=48a66213fe graphdriver(s)=overlay2 version=19.03.12
* Oct 13 00:49:59 multinode-20201013004012-797 dockerd[2442]: time="2020-10-13T00:49:59.000968997Z" level=info msg="Daemon has completed initialization"
* Oct 13 00:49:59 multinode-20201013004012-797 dockerd[2442]: time="2020-10-13T00:49:59.046530619Z" level=info msg="API listen on /var/run/docker.sock"
* Oct 13 00:49:59 multinode-20201013004012-797 systemd[1]: Started Docker Application Container Engine.
* Oct 13 00:49:59 multinode-20201013004012-797 dockerd[2442]: time="2020-10-13T00:49:59.046790503Z" level=info msg="API listen on [::]:2376"
* Oct 13 00:50:09 multinode-20201013004012-797 dockerd[2450]: time="2020-10-13T00:50:09.018141686Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/373fbf160efc560bb83a7a59b64899b70a73832ba747890fb57dd01f02071812/shim.sock" debug=false pid=3391
* Oct 13 00:50:09 multinode-20201013004012-797 dockerd[2450]: time="2020-10-13T00:50:09.103411009Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f9779eb4d81b08a252e188e09748629e076e3ba60e72998a4bf2e10fd7cf3cb3/shim.sock" debug=false pid=3413
* Oct 13 00:50:09 multinode-20201013004012-797 dockerd[2450]: time="2020-10-13T00:50:09.183078896Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0dee6cf8bb22380eb15546bd74c94c9cdda4cc506b0a1c6f65f6e7c106c8ff73/shim.sock" debug=false pid=3433
* Oct 13 00:50:09 multinode-20201013004012-797 dockerd[2450]: time="2020-10-13T00:50:09.212749084Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6eb44902ab67c86bde9e60def0cd0404240deec7b734fff66edcb8c7e2e484d1/shim.sock" debug=false pid=3443
* Oct 13 00:50:10 multinode-20201013004012-797 dockerd[2450]: time="2020-10-13T00:50:10.183471319Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/457fdd719254d43c7d4fc11fe59236789e566c98dad3c7f058f7957a0f72b635/shim.sock" debug=false pid=3632
* Oct 13 00:50:10 multinode-20201013004012-797 dockerd[2450]: time="2020-10-13T00:50:10.184440507Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/206d406f4503dbfe169e581b3d5a81a01f37859f8287762396332c9398925667/shim.sock" debug=false pid=3634
* Oct 13 00:50:10 multinode-20201013004012-797 dockerd[2450]: time="2020-10-13T00:50:10.299761345Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e36cfe2a7e6008712f62a3405d31795935efb159e74bc845d6893500d66dbe9f/shim.sock" debug=false pid=3666
* Oct 13 00:50:10 multinode-20201013004012-797 dockerd[2450]: time="2020-10-13T00:50:10.466548129Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6eb4df51563b66a23aca0c06184d15d14e9b979527dc8beb580c84d43d67f66c/shim.sock" debug=false pid=3713
* Oct 13 00:50:22 multinode-20201013004012-797 dockerd[2450]: time="2020-10-13T00:50:22.870987798Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/7f705719a040a3db7bb300a94b5f2b171137a26cf55ad270090a8498df1f22c7/shim.sock" debug=false pid=4192
* Oct 13 00:50:22 multinode-20201013004012-797 dockerd[2450]: time="2020-10-13T00:50:22.910650046Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/7ecd3576fde249743c20a09f645e79db9bb395e2348a01ba38b1853bba3225b4/shim.sock" debug=false pid=4195
* Oct 13 00:50:23 multinode-20201013004012-797 dockerd[2450]: time="2020-10-13T00:50:23.396089155Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d819ed709fbedb9903fefcf2c40b1b2ab331f95bb9a2b941c42ed3b79f5eea20/shim.sock" debug=false pid=4232
* Oct 13 00:50:23 multinode-20201013004012-797 dockerd[2450]: time="2020-10-13T00:50:23.435257199Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d4f8390091c541c260963666b85ca9425409cafc512643890c333e409317641a/shim.sock" debug=false pid=4233
* Oct 13 00:50:25 multinode-20201013004012-797 dockerd[2450]: time="2020-10-13T00:50:25.302027902Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b998d9bae75585218b0e9d0c12e75cdb0e0582962fb01ed060d8e07bacf9f456/shim.sock" debug=false pid=4378
* Oct 13 00:50:25 multinode-20201013004012-797 dockerd[2450]: time="2020-10-13T00:50:25.367948185Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2337f0de819a78b4115a2d385c8151759e547158daf09283a9552ef96a3d86df/shim.sock" debug=false pid=4395
* Oct 13 00:50:25 multinode-20201013004012-797 dockerd[2450]: time="2020-10-13T00:50:25.403789396Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/04c110c3f71b73020647f7ac419be2372c7f8b0f6923edc310dbfa6eb9800877/shim.sock" debug=false pid=4409
* Oct 13 00:50:26 multinode-20201013004012-797 dockerd[2450]: time="2020-10-13T00:50:26.338643528Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b269e8e38615b9ed7d3fd88b1397bb1f987c50e69d9f3f3ce01970a0f2672518/shim.sock" debug=false pid=4494
* Oct 13 00:50:56 multinode-20201013004012-797 dockerd[2450]: time="2020-10-13T00:50:56.130469238Z" level=info msg="shim reaped" id=04c110c3f71b73020647f7ac419be2372c7f8b0f6923edc310dbfa6eb9800877
* Oct 13 00:50:56 multinode-20201013004012-797 dockerd[2442]: time="2020-10-13T00:50:56.142167867Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Oct 13 00:51:09 multinode-20201013004012-797 dockerd[2450]: time="2020-10-13T00:51:09.249139381Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b6e65664dc14c04ef4467a21f896765a729ed73f8dbdd2798b0a2b2dfe7e6db9/shim.sock" debug=false pid=4877
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
* b6e65664dc14c bad58561c4be7 51 seconds ago Running storage-provisioner 2 d819ed709fbed
* b269e8e38615b 2186a1a396deb About a minute ago Running kindnet-cni 1 7f705719a040a
* 04c110c3f71b7 bad58561c4be7 About a minute ago Exited storage-provisioner 1 d819ed709fbed
* 2337f0de819a7 d373dd5a8593a About a minute ago Running kube-proxy 1 d4f8390091c54
* b998d9bae7558 bfe3a36ebd252 About a minute ago Running coredns 1 7ecd3576fde24
* e36cfe2a7e600 2f32d66b884f8 About a minute ago Running kube-scheduler 1 6eb44902ab67c
* 6eb4df51563b6 0369cf4303ffd About a minute ago Running etcd 1 0dee6cf8bb223
* 206d406f4503d 607331163122e About a minute ago Running kube-apiserver 1 373fbf160efc5
* 457fdd719254d 8603821e1a7a5 About a minute ago Running kube-controller-manager 0 f9779eb4d81b0
* ea8cca41467da kindest/kindnetd@sha256:b33085aafb18b652ce4b3b8c41dbf172dac8b62ffe016d26863f88e7f6bf1c98 6 minutes ago Exited kindnet-cni 0 487bd2babb9b5
* e3de1a4888492 bfe3a36ebd252 8 minutes ago Exited coredns 0 768e66d62fb0f
* 63d46dce0d5d1 d373dd5a8593a 9 minutes ago Exited kube-proxy 0 1e560fd68229e
* b5542628f95e6 0369cf4303ffd 9 minutes ago Exited etcd 0 4b047ebbd6176
* a43f46dd809f0 2f32d66b884f8 9 minutes ago Exited kube-scheduler 0 cce367e9185be
* fa3ec8a56f0c4 607331163122e 9 minutes ago Exited kube-apiserver 0 0797295f19e96
*
* ==> coredns [b998d9bae755] <==
* .:53
* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
* CoreDNS-1.7.0
* linux/amd64, go1.14.4, f59c03d
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* I1013 00:50:56.026674 1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125 (started: 2020-10-13 00:50:26.023606506 +0000 UTC m=+0.346802293) (total time: 30.002996326s):
* Trace[2019727887]: [30.002996326s] [30.002996326s] END
* I1013 00:50:56.027511 1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125 (started: 2020-10-13 00:50:26.018927358 +0000 UTC m=+0.342123174) (total time: 30.008561762s):
* Trace[1427131847]: [30.008561762s] [30.008561762s] END
* E1013 00:50:56.029564 1 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
* E1013 00:50:56.029599 1 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
* I1013 00:50:56.027590 1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125 (started: 2020-10-13 00:50:26.021652084 +0000 UTC m=+0.344847897) (total time: 30.00592894s):
* Trace[939984059]: [30.00592894s] [30.00592894s] END
* E1013 00:50:56.031261 1 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
*
* ==> coredns [e3de1a488849] <==
* .:53
* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
* CoreDNS-1.7.0
* linux/amd64, go1.14.4, f59c03d
* [INFO] SIGTERM: Shutting down servers then terminating
* [INFO] plugin/health: Going into lameduck mode for 5s
*
* ==> describe nodes <==
* Name: multinode-20201013004012-797
* Roles: master
* Labels: beta.kubernetes.io/arch=amd64
* beta.kubernetes.io/os=linux
* kubernetes.io/arch=amd64
* kubernetes.io/hostname=multinode-20201013004012-797
* kubernetes.io/os=linux
* minikube.k8s.io/commit=76248792204086dd63d9b8d39726eb443b28e8e5
* minikube.k8s.io/name=multinode-20201013004012-797
* minikube.k8s.io/updated_at=2020_10_13T00_42_50_0700
* minikube.k8s.io/version=v1.14.0
* node-role.kubernetes.io/master=
* Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
* node.alpha.kubernetes.io/ttl: 0
* volumes.kubernetes.io/controller-managed-attach-detach: true
* CreationTimestamp: Tue, 13 Oct 2020 00:42:46 +0000
* Taints:             <none>
* Unschedulable: false
* Lease:
* HolderIdentity: multinode-20201013004012-797
* AcquireTime:     <unset>
* RenewTime: Tue, 13 Oct 2020 00:51:59 +0000
* Conditions:
* Type Status LastHeartbeatTime LastTransitionTime Reason Message
* ---- ------ ----------------- ------------------ ------ -------
* MemoryPressure False Tue, 13 Oct 2020 00:50:20 +0000 Tue, 13 Oct 2020 00:42:39 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
* DiskPressure False Tue, 13 Oct 2020 00:50:20 +0000 Tue, 13 Oct 2020 00:42:39 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
* PIDPressure False Tue, 13 Oct 2020 00:50:20 +0000 Tue, 13 Oct 2020 00:42:39 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
* Ready True Tue, 13 Oct 2020 00:50:20 +0000 Tue, 13 Oct 2020 00:43:02 +0000 KubeletReady kubelet is posting ready status
* Addresses:
* InternalIP: 192.168.99.100
* Hostname: multinode-20201013004012-797
* Capacity:
* cpu: 2
* ephemeral-storage: 17784752Ki
* hugepages-2Mi: 0
* memory: 2186204Ki
* pods: 110
* Allocatable:
* cpu: 2
* ephemeral-storage: 17784752Ki
* hugepages-2Mi: 0
* memory: 2186204Ki
* pods: 110
* System Info:
* Machine ID: 2f4bf33154f048d0976a51f27679f4b1
* System UUID: 923d2546-14a1-fc48-a49f-85441870e39a
* Boot ID: bae4599a-6615-4f8f-99a3-324fe055b743
* Kernel Version: 4.19.114
* OS Image: Buildroot 2020.02.6
* Operating System: linux
* Architecture: amd64
* Container Runtime Version: docker://19.3.12
* Kubelet Version: v1.19.2
* Kube-Proxy Version: v1.19.2
* PodCIDR: 10.244.0.0/24
* PodCIDRs: 10.244.0.0/24
* Non-terminated Pods: (8 in total)
* Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
* --------- ---- ------------ ---------- --------------- ------------- ---
* kube-system coredns-f9fd979d6-q5nlf 100m (5%) 0 (0%) 70Mi (3%) 170Mi (7%) 9m3s
* kube-system etcd-multinode-20201013004012-797 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m8s
* kube-system kindnet-n8k7v 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 6m43s
* kube-system kube-apiserver-multinode-20201013004012-797 250m (12%) 0 (0%) 0 (0%) 0 (0%) 9m8s
* kube-system kube-controller-manager-multinode-20201013004012-797 200m (10%) 0 (0%) 0 (0%) 0 (0%) 98s
* kube-system kube-proxy-lw74s 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m3s
* kube-system kube-scheduler-multinode-20201013004012-797 100m (5%) 0 (0%) 0 (0%) 0 (0%) 9m8s
* kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m1s
* Allocated resources:
* (Total limits may be over 100 percent, i.e., overcommitted.)
* Resource Requests Limits
* -------- -------- ------
* cpu 750m (37%) 100m (5%)
* memory 120Mi (5%) 220Mi (10%)
* ephemeral-storage 0 (0%) 0 (0%)
* hugepages-2Mi 0 (0%) 0 (0%)
* Events:
* Type Reason Age From Message
* ---- ------ ---- ---- -------
* Normal NodeHasSufficientMemory 9m40s (x6 over 9m41s) kubelet Node multinode-20201013004012-797 status is now: NodeHasSufficientMemory
* Normal NodeHasNoDiskPressure 9m40s (x5 over 9m41s) kubelet Node multinode-20201013004012-797 status is now: NodeHasNoDiskPressure
* Normal NodeHasSufficientPID 9m40s (x5 over 9m41s) kubelet Node multinode-20201013004012-797 status is now: NodeHasSufficientPID
* Normal Starting 9m9s kubelet Starting kubelet.
* Normal NodeHasSufficientMemory 9m8s kubelet Node multinode-20201013004012-797 status is now: NodeHasSufficientMemory
* Normal NodeHasNoDiskPressure 9m8s kubelet Node multinode-20201013004012-797 status is now: NodeHasNoDiskPressure
* Normal NodeHasSufficientPID 9m8s kubelet Node multinode-20201013004012-797 status is now: NodeHasSufficientPID
* Normal NodeNotReady 9m8s kubelet Node multinode-20201013004012-797 status is now: NodeNotReady
* Normal NodeAllocatableEnforced 9m8s kubelet Updated Node Allocatable limit across pods
* Normal Starting 9m1s kube-proxy Starting kube-proxy.
* Normal NodeReady 8m58s kubelet Node multinode-20201013004012-797 status is now: NodeReady
* Normal Starting 113s kubelet Starting kubelet.
* Normal NodeHasSufficientMemory 112s (x8 over 112s) kubelet Node multinode-20201013004012-797 status is now: NodeHasSufficientMemory
* Normal NodeHasNoDiskPressure 112s (x8 over 112s) kubelet Node multinode-20201013004012-797 status is now: NodeHasNoDiskPressure
* Normal NodeHasSufficientPID 112s (x7 over 112s) kubelet Node multinode-20201013004012-797 status is now: NodeHasSufficientPID
* Normal NodeAllocatableEnforced 112s kubelet Updated Node Allocatable limit across pods
* Normal Starting 94s kube-proxy Starting kube-proxy.
*
*
* Name: multinode-20201013004012-797-m02
* Roles:              <none>
* Labels: beta.kubernetes.io/arch=amd64
* beta.kubernetes.io/os=linux
* kubernetes.io/arch=amd64
* kubernetes.io/hostname=multinode-20201013004012-797-m02
* kubernetes.io/os=linux
* Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
* node.alpha.kubernetes.io/ttl: 0
* volumes.kubernetes.io/controller-managed-attach-detach: true
* CreationTimestamp: Tue, 13 Oct 2020 00:45:13 +0000
* Taints: node.kubernetes.io/unreachable:NoExecute
* node.kubernetes.io/unreachable:NoSchedule
* Unschedulable: false
* Lease:
* HolderIdentity: multinode-20201013004012-797-m02
* AcquireTime:     <unset>
* RenewTime: Tue, 13 Oct 2020 00:49:13 +0000
* Conditions:
* Type Status LastHeartbeatTime LastTransitionTime Reason Message
* ---- ------ ----------------- ------------------ ------ -------
* MemoryPressure Unknown Tue, 13 Oct 2020 00:45:45 +0000 Tue, 13 Oct 2020 00:51:14 +0000 NodeStatusUnknown Kubelet stopped posting node status.
* DiskPressure Unknown Tue, 13 Oct 2020 00:45:45 +0000 Tue, 13 Oct 2020 00:51:14 +0000 NodeStatusUnknown Kubelet stopped posting node status.
* PIDPressure Unknown Tue, 13 Oct 2020 00:45:45 +0000 Tue, 13 Oct 2020 00:51:14 +0000 NodeStatusUnknown Kubelet stopped posting node status.
* Ready Unknown Tue, 13 Oct 2020 00:45:45 +0000 Tue, 13 Oct 2020 00:51:14 +0000 NodeStatusUnknown Kubelet stopped posting node status.
* Addresses:
* InternalIP: 192.168.99.101
* Hostname: multinode-20201013004012-797-m02
* Capacity:
* cpu: 2
* ephemeral-storage: 1967584Ki
* hugepages-2Mi: 0
* memory: 2186204Ki
* pods: 110
* Allocatable:
* cpu: 2
* ephemeral-storage: 1967584Ki
* hugepages-2Mi: 0
* memory: 2186204Ki
* pods: 110
* System Info:
* Machine ID: 9e0a835a171c43baa92791719ca53e98
* System UUID: bea0208e-91c7-5c49-80b0-b4c26bf22f65
* Boot ID: a9e45ebb-38b6-4cd3-9795-50d7b5f5b71a
* Kernel Version: 4.19.114
* OS Image: Buildroot 2020.02.6
* Operating System: linux
* Architecture: amd64
* Container Runtime Version: docker://19.3.12
* Kubelet Version: v1.19.2
* Kube-Proxy Version: v1.19.2
* PodCIDR: 10.244.1.0/24
* PodCIDRs: 10.244.1.0/24
* Non-terminated Pods: (2 in total)
* Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
* --------- ---- ------------ ---------- --------------- ------------- ---
* kube-system kindnet-k74dt 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 6m43s
* kube-system kube-proxy-79xs5 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m47s
* Allocated resources:
* (Total limits may be over 100 percent, i.e., overcommitted.)
* Resource Requests Limits
* -------- -------- ------
* cpu 100m (5%) 100m (5%)
* memory 50Mi (2%) 50Mi (2%)
* ephemeral-storage 0 (0%) 0 (0%)
* hugepages-2Mi 0 (0%) 0 (0%)
* Events:
* Type Reason Age From Message
* ---- ------ ---- ---- -------
* Normal Starting 6m48s kubelet Starting kubelet.
* Normal NodeHasSufficientMemory 6m47s (x2 over 6m47s) kubelet Node multinode-20201013004012-797-m02 status is now: NodeHasSufficientMemory
* Normal NodeHasNoDiskPressure 6m47s (x2 over 6m47s) kubelet Node multinode-20201013004012-797-m02 status is now: NodeHasNoDiskPressure
* Normal NodeHasSufficientPID 6m47s (x2 over 6m47s) kubelet Node multinode-20201013004012-797-m02 status is now: NodeHasSufficientPID
* Normal NodeAllocatableEnforced 6m47s kubelet Updated Node Allocatable limit across pods
* Normal Starting 6m45s kube-proxy Starting kube-proxy.
* Normal NodeReady 6m36s kubelet Node multinode-20201013004012-797-m02 status is now: NodeReady
*
* ==> dmesg <==
* [ +5.040834] hpet1: lost 322 rtc interrupts
* [ +5.022713] hpet1: lost 320 rtc interrupts
* [ +5.023524] hpet_rtc_timer_reinit: 39 callbacks suppressed
* [ +0.000001] hpet1: lost 321 rtc interrupts
* [ +1.617195] hrtimer: interrupt took 4308319 ns
* [ +3.406569] hpet_rtc_timer_reinit: 3 callbacks suppressed
* [ +0.000002] hpet1: lost 319 rtc interrupts
* [ +5.021454] hpet1: lost 320 rtc interrupts
* [ +5.025971] hpet1: lost 319 rtc interrupts
* [ +4.996068] hpet1: lost 319 rtc interrupts
* [ +5.027267] hpet1: lost 320 rtc interrupts
* [ +5.023330] hpet1: lost 319 rtc interrupts
* [Oct13 00:51] hpet1: lost 321 rtc interrupts
* [ +5.022862] hpet1: lost 320 rtc interrupts
* [ +5.025100] hpet1: lost 319 rtc interrupts
* [ +5.022793] hpet1: lost 321 rtc interrupts
* [ +5.029586] hpet1: lost 320 rtc interrupts
* [ +5.019944] hpet1: lost 319 rtc interrupts
* [ +5.026189] hpet1: lost 319 rtc interrupts
* [ +4.998137] hpet1: lost 319 rtc interrupts
* [ +5.021959] hpet1: lost 320 rtc interrupts
* [ +5.019950] hpet1: lost 319 rtc interrupts
* [ +5.028043] hpet1: lost 320 rtc interrupts
* [ +1.575069] NFSD: Unable to end grace period: -110
* [ +3.451346] hpet1: lost 320 rtc interrupts
*
* ==> etcd [6eb4df51563b] <==
* 2020-10-13 00:50:12.074808 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
* 2020-10-13 00:50:12.074971 I | embed: listening for metrics on http://127.0.0.1:2381
* 2020-10-13 00:50:12.075022 I | embed: listening for peers on 192.168.99.100:2380
* raft2020/10/13 00:50:13 INFO: 7feb3ee23ce5b4a7 is starting a new election at term 2
* raft2020/10/13 00:50:13 INFO: 7feb3ee23ce5b4a7 became candidate at term 3
* raft2020/10/13 00:50:13 INFO: 7feb3ee23ce5b4a7 received MsgVoteResp from 7feb3ee23ce5b4a7 at term 3
* raft2020/10/13 00:50:13 INFO: 7feb3ee23ce5b4a7 became leader at term 3
* raft2020/10/13 00:50:13 INFO: raft.node: 7feb3ee23ce5b4a7 elected leader 7feb3ee23ce5b4a7 at term 3
* 2020-10-13 00:50:13.937327 I | etcdserver: published {Name:multinode-20201013004012-797 ClientURLs:[https://192.168.99.100:2379]} to cluster a9449303b0ccd8c0
* 2020-10-13 00:50:14.045133 I | embed: ready to serve client requests
* 2020-10-13 00:50:14.067993 I | embed: serving client requests on 192.168.99.100:2379
* 2020-10-13 00:50:14.068048 I | embed: ready to serve client requests
* 2020-10-13 00:50:14.073452 I | embed: serving client requests on 127.0.0.1:2379
* 2020-10-13 00:50:23.614558 W | etcdserver: read-only range request "key:"/registry/serviceaccounts/kube-system/token-cleaner" " with result "range_response_count:1 size:236" took too long (124.600072ms) to execute
* 2020-10-13 00:50:25.651016 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-10-13 00:50:29.038909 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-10-13 00:50:39.038384 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-10-13 00:50:49.038269 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-10-13 00:50:59.039790 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-10-13 00:51:09.039600 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-10-13 00:51:19.039638 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-10-13 00:51:29.038331 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-10-13 00:51:39.039707 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-10-13 00:51:49.039644 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-10-13 00:51:59.038616 I | etcdserver/api/etcdhttp: /health OK (status code 200)
*
* ==> etcd [b5542628f95e] <==
* 2020-10-13 00:45:36.354502 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-10-13 00:45:46.353365 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-10-13 00:45:56.354548 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-10-13 00:46:06.354886 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-10-13 00:46:16.352164 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-10-13 00:46:26.353804 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-10-13 00:46:36.354579 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-10-13 00:46:46.354990 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-10-13 00:46:56.354637 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-10-13 00:47:06.352871 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-10-13 00:47:16.355556 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-10-13 00:47:26.353797 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-10-13 00:47:36.353456 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-10-13 00:47:46.353324 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-10-13 00:47:56.354629 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-10-13 00:48:06.428990 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-10-13 00:48:16.354420 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-10-13 00:48:26.353517 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-10-13 00:48:36.406958 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-10-13 00:48:46.381296 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-10-13 00:48:56.352460 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-10-13 00:49:06.354445 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-10-13 00:49:15.367292 N | pkg/osutil: received terminated signal, shutting down...
* WARNING: 2020/10/13 00:49:15 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* 2020-10-13 00:49:15.424838 I | etcdserver: skipped leadership transfer for single voting member cluster
*
* ==> kernel <==
* 00:52:01 up 2 min, 0 users, load average: 1.67, 1.03, 0.42
* Linux multinode-20201013004012-797 4.19.114 #1 SMP Mon Oct 12 16:32:58 PDT 2020 x86_64 GNU/Linux
* PRETTY_NAME="Buildroot 2020.02.6"
*
* ==> kube-apiserver [206d406f4503] <==
* I1013 00:50:19.569314 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
* I1013 00:50:19.588599 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
* I1013 00:50:19.642344 1 cache.go:39] Caches are synced for autoregister controller
* I1013 00:50:19.842006 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
* I1013 00:50:19.842049 1 cache.go:39] Caches are synced for AvailableConditionController controller
* I1013 00:50:19.843338 1 shared_informer.go:247] Caches are synced for crd-autoregister
* I1013 00:50:20.426716 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
* I1013 00:50:20.426829 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
* I1013 00:50:20.455185 1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
* I1013 00:50:21.789575 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
* I1013 00:50:22.306999 1 controller.go:606] quota admission added evaluator for: serviceaccounts
* I1013 00:50:22.381826 1 controller.go:606] quota admission added evaluator for: deployments.apps
* I1013 00:50:22.630044 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
* I1013 00:50:22.675630 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
* I1013 00:50:35.042769 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
* I1013 00:50:35.106224 1 controller.go:606] quota admission added evaluator for: endpoints
* I1013 00:50:45.236003 1 client.go:360] parsed scheme: "passthrough"
* I1013 00:50:45.236885 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
* I1013 00:50:45.237424 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1013 00:51:17.311373 1 client.go:360] parsed scheme: "passthrough"
* I1013 00:51:17.311503 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
* I1013 00:51:17.311738 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1013 00:51:56.909357 1 client.go:360] parsed scheme: "passthrough"
* I1013 00:51:56.909479 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
* I1013 00:51:56.909489 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
*
* ==> kube-apiserver [fa3ec8a56f0c] <==
* W1013 00:49:16.401919 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W1013 00:49:16.402303 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W1013 00:49:16.402519 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W1013 00:49:16.403270 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W1013 00:49:16.403373 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W1013 00:49:16.406187 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W1013 00:49:16.406549 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W1013 00:49:16.406696 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W1013 00:49:16.406748 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W1013 00:49:16.408464 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W1013 00:49:16.408547 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W1013 00:49:16.408583 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W1013 00:49:16.408810 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W1013 00:49:16.408894 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W1013 00:49:16.409015 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W1013 00:49:16.409140 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W1013 00:49:16.409226 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W1013 00:49:16.409281 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W1013 00:49:16.410158 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W1013 00:49:16.410284 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W1013 00:49:16.410560 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W1013 00:49:16.410597 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W1013 00:49:16.410632 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W1013 00:49:16.410764 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W1013 00:49:16.411630 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
*
* ==> kube-controller-manager [457fdd719254] <==
* I1013 00:50:34.894153 1 shared_informer.go:247] Caches are synced for job
* I1013 00:50:34.894269 1 shared_informer.go:247] Caches are synced for certificate-csrapproving
* I1013 00:50:34.894311 1 shared_informer.go:247] Caches are synced for attach detach
* I1013 00:50:34.895826 1 shared_informer.go:247] Caches are synced for HPA
* I1013 00:50:34.898212 1 shared_informer.go:247] Caches are synced for persistent volume
* I1013 00:50:34.898889 1 shared_informer.go:247] Caches are synced for PVC protection
* I1013 00:50:34.900543 1 range_allocator.go:373] Set node multinode-20201013004012-797 PodCIDR to [10.244.0.0/24]
* I1013 00:50:34.901464 1 shared_informer.go:247] Caches are synced for stateful set
* I1013 00:50:34.908720 1 shared_informer.go:247] Caches are synced for bootstrap_signer
* I1013 00:50:35.039863 1 shared_informer.go:247] Caches are synced for endpoint_slice
* I1013 00:50:35.046336 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring
* I1013 00:50:35.079922 1 shared_informer.go:247] Caches are synced for resource quota
* I1013 00:50:35.103927 1 shared_informer.go:247] Caches are synced for resource quota
* I1013 00:50:35.103954 1 shared_informer.go:247] Caches are synced for endpoint
* I1013 00:50:35.152055 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
* I1013 00:50:35.389770 1 shared_informer.go:247] Caches are synced for garbage collector
* I1013 00:50:35.389791 1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
* I1013 00:50:35.453824 1 shared_informer.go:247] Caches are synced for garbage collector
* I1013 00:51:14.891297 1 event.go:291] "Event occurred" object="multinode-20201013004012-797-m02" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-20201013004012-797-m02 status is now: NodeNotReady"
* I1013 00:51:14.927400 1 event.go:291] "Event occurred" object="kube-system/kindnet-k74dt" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
* I1013 00:51:14.959591 1 event.go:291] "Event occurred" object="kube-system/kube-proxy-79xs5" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
* I1013 00:51:34.838210 1 gc_controller.go:78] PodGC is force deleting Pod: kube-system/kindnet-z8f78
* I1013 00:51:34.859162 1 gc_controller.go:189] Forced deletion of orphaned Pod kube-system/kindnet-z8f78 succeeded
* I1013 00:51:34.859447 1 gc_controller.go:78] PodGC is force deleting Pod: kube-system/kube-proxy-lbvd4
* I1013 00:51:34.877601 1 gc_controller.go:189] Forced deletion of orphaned Pod kube-system/kube-proxy-lbvd4 succeeded
*
* ==> kube-proxy [2337f0de819a] <==
* I1013 00:50:26.385465 1 node.go:136] Successfully retrieved node IP: 192.168.99.100
* I1013 00:50:26.386334 1 server_others.go:111] kube-proxy node IP is an IPv4 address (192.168.99.100), assume IPv4 operation
* W1013 00:50:26.604506 1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy
* I1013 00:50:26.605247 1 server_others.go:186] Using iptables Proxier.
* W1013 00:50:26.605874 1 server_others.go:456] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
* I1013 00:50:26.605882 1 server_others.go:467] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
* I1013 00:50:26.609301 1 server.go:650] Version: v1.19.2
* I1013 00:50:26.610489 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
* I1013 00:50:26.610614 1 conntrack.go:52] Setting nf_conntrack_max to 131072
* I1013 00:50:26.611243 1 conntrack.go:83] Setting conntrack hashsize to 32768
* I1013 00:50:26.614333 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
* I1013 00:50:26.615360 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
* I1013 00:50:26.619709 1 config.go:315] Starting service config controller
* I1013 00:50:26.620329 1 shared_informer.go:240] Waiting for caches to sync for service config
* I1013 00:50:26.622597 1 config.go:224] Starting endpoint slice config controller
* I1013 00:50:26.623483 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
* I1013 00:50:26.721345 1 shared_informer.go:247] Caches are synced for service config
* I1013 00:50:26.725309 1 shared_informer.go:247] Caches are synced for endpoint slice config
*
* ==> kube-proxy [63d46dce0d5d] <==
* I1013 00:42:59.755427 1 node.go:136] Successfully retrieved node IP: 192.168.99.100
* I1013 00:42:59.755991 1 server_others.go:111] kube-proxy node IP is an IPv4 address (192.168.99.100), assume IPv4 operation
* W1013 00:42:59.867944 1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy
* I1013 00:42:59.870086 1 server_others.go:186] Using iptables Proxier.
* W1013 00:42:59.870433 1 server_others.go:456] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
* I1013 00:42:59.870491 1 server_others.go:467] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
* I1013 00:42:59.872512 1 server.go:650] Version: v1.19.2
* I1013 00:42:59.873912 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
* I1013 00:42:59.874045 1 conntrack.go:52] Setting nf_conntrack_max to 131072
* I1013 00:42:59.874476 1 conntrack.go:83] Setting conntrack hashsize to 32768
* I1013 00:42:59.883612 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
* I1013 00:42:59.883662 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
* I1013 00:42:59.885191 1 config.go:315] Starting service config controller
* I1013 00:42:59.885551 1 shared_informer.go:240] Waiting for caches to sync for service config
* I1013 00:42:59.889449 1 config.go:224] Starting endpoint slice config controller
* I1013 00:42:59.892150 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
* I1013 00:42:59.992036 1 shared_informer.go:247] Caches are synced for service config
* I1013 00:42:59.992475 1 shared_informer.go:247] Caches are synced for endpoint slice config
*
* ==> kube-scheduler [a43f46dd809f] <==
* E1013 00:42:46.682030 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
* E1013 00:42:46.682838 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
* E1013 00:42:46.683469 1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E1013 00:42:46.683696 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
* E1013 00:42:46.684008 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
* E1013 00:42:46.684559 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
* E1013 00:42:46.684755 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
* E1013 00:42:46.685121 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
* E1013 00:42:46.685343 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E1013 00:42:46.686869 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
* E1013 00:42:46.687048 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
* E1013 00:42:47.555389 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
* E1013 00:42:47.640511 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
* E1013 00:42:47.704935 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
* E1013 00:42:47.714779 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
* E1013 00:42:47.809036 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
* E1013 00:42:47.821538 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
* E1013 00:42:47.843859 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E1013 00:42:47.897420 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
* E1013 00:42:47.907509 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
* E1013 00:42:47.971492 1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E1013 00:42:48.036228 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
* E1013 00:42:48.048931 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
* E1013 00:42:48.120324 1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* I1013 00:42:50.161750 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kube-scheduler [e36cfe2a7e60] <==
* I1013 00:50:12.114180 1 registry.go:173] Registering SelectorSpread plugin
* I1013 00:50:12.114223 1 registry.go:173] Registering SelectorSpread plugin
* I1013 00:50:12.617547 1 serving.go:331] Generated self-signed cert in-memory
* W1013 00:50:19.507172 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
* W1013 00:50:19.507656 1 authentication.go:294] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
* W1013 00:50:19.508155 1 authentication.go:295] Continuing without authentication configuration. This may treat all requests as anonymous.
* W1013 00:50:19.508225 1 authentication.go:296] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
* I1013 00:50:19.593636 1 registry.go:173] Registering SelectorSpread plugin
* I1013 00:50:19.593857 1 registry.go:173] Registering SelectorSpread plugin
* I1013 00:50:19.613359 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
* I1013 00:50:19.613906 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I1013 00:50:19.614162 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I1013 00:50:19.615000 1 tlsconfig.go:240] Starting DynamicServingCertificateController
* I1013 00:50:19.915488 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Logs begin at Tue 2020-10-13 00:49:52 UTC, end at Tue 2020-10-13 00:52:02 UTC. --
* Oct 13 00:50:22 multinode-20201013004012-797 kubelet[2866]: I1013 00:50:22.177190 2866 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/96b052c3-9b1c-40d4-82a8-6e20cdb7075c-kube-proxy") pod "kube-proxy-lw74s" (UID: "96b052c3-9b1c-40d4-82a8-6e20cdb7075c")
* Oct 13 00:50:22 multinode-20201013004012-797 kubelet[2866]: I1013 00:50:22.177206 2866 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-9d2dl" (UniqueName: "kubernetes.io/secret/96b052c3-9b1c-40d4-82a8-6e20cdb7075c-kube-proxy-token-9d2dl") pod "kube-proxy-lw74s" (UID: "96b052c3-9b1c-40d4-82a8-6e20cdb7075c")
* Oct 13 00:50:22 multinode-20201013004012-797 kubelet[2866]: I1013 00:50:22.177221 2866 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-cfg" (UniqueName: "kubernetes.io/host-path/ec283a1c-c77a-4c32-9f6f-ccec2b027877-cni-cfg") pod "kindnet-n8k7v" (UID: "ec283a1c-c77a-4c32-9f6f-ccec2b027877")
* Oct 13 00:50:22 multinode-20201013004012-797 kubelet[2866]: I1013 00:50:22.177236 2866 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/ec283a1c-c77a-4c32-9f6f-ccec2b027877-xtables-lock") pod "kindnet-n8k7v" (UID: "ec283a1c-c77a-4c32-9f6f-ccec2b027877")
* Oct 13 00:50:22 multinode-20201013004012-797 kubelet[2866]: I1013 00:50:22.177251 2866 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/96b052c3-9b1c-40d4-82a8-6e20cdb7075c-xtables-lock") pod "kube-proxy-lw74s" (UID: "96b052c3-9b1c-40d4-82a8-6e20cdb7075c")
* Oct 13 00:50:22 multinode-20201013004012-797 kubelet[2866]: I1013 00:50:22.177266 2866 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/a0fd6cee-5f58-40cc-85d7-6530ca8479ec-tmp") pod "storage-provisioner" (UID: "a0fd6cee-5f58-40cc-85d7-6530ca8479ec")
* Oct 13 00:50:22 multinode-20201013004012-797 kubelet[2866]: I1013 00:50:22.177281 2866 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kindnet-token-2zbq6" (UniqueName: "kubernetes.io/secret/ec283a1c-c77a-4c32-9f6f-ccec2b027877-kindnet-token-2zbq6") pod "kindnet-n8k7v" (UID: "ec283a1c-c77a-4c32-9f6f-ccec2b027877")
* Oct 13 00:50:22 multinode-20201013004012-797 kubelet[2866]: I1013 00:50:22.177297 2866 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7436236a-5e8d-47f1-b3ee-33798051a89c-config-volume") pod "coredns-f9fd979d6-q5nlf" (UID: "7436236a-5e8d-47f1-b3ee-33798051a89c")
* Oct 13 00:50:22 multinode-20201013004012-797 kubelet[2866]: I1013 00:50:22.177313 2866 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-sdfgq" (UniqueName: "kubernetes.io/secret/a0fd6cee-5f58-40cc-85d7-6530ca8479ec-storage-provisioner-token-sdfgq") pod "storage-provisioner" (UID: "a0fd6cee-5f58-40cc-85d7-6530ca8479ec")
* Oct 13 00:50:22 multinode-20201013004012-797 kubelet[2866]: I1013 00:50:22.177321 2866 reconciler.go:157] Reconciler: start to sync state
* Oct 13 00:50:25 multinode-20201013004012-797 kubelet[2866]: W1013 00:50:25.019660 2866 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-f9fd979d6-q5nlf through plugin: invalid network status for
* Oct 13 00:50:25 multinode-20201013004012-797 kubelet[2866]: W1013 00:50:25.080287 2866 pod_container_deletor.go:79] Container "7ecd3576fde249743c20a09f645e79db9bb395e2348a01ba38b1853bba3225b4" not found in pod's containers
* Oct 13 00:50:25 multinode-20201013004012-797 kubelet[2866]: W1013 00:50:25.116694 2866 pod_container_deletor.go:79] Container "d4f8390091c541c260963666b85ca9425409cafc512643890c333e409317641a" not found in pod's containers
* Oct 13 00:50:26 multinode-20201013004012-797 kubelet[2866]: W1013 00:50:26.073450 2866 pod_container_deletor.go:79] Container "7f705719a040a3db7bb300a94b5f2b171137a26cf55ad270090a8498df1f22c7" not found in pod's containers
* Oct 13 00:50:26 multinode-20201013004012-797 kubelet[2866]: I1013 00:50:26.160312 2866 kubelet.go:1559] Trying to delete pod kube-controller-manager-multinode-20201013004012-797_kube-system 4f8f8c13-de0e-44fa-8d76-0f004d939793
* Oct 13 00:50:26 multinode-20201013004012-797 kubelet[2866]: W1013 00:50:26.209862 2866 pod_container_deletor.go:79] Container "d819ed709fbedb9903fefcf2c40b1b2ab331f95bb9a2b941c42ed3b79f5eea20" not found in pod's containers
* Oct 13 00:50:27 multinode-20201013004012-797 kubelet[2866]: W1013 00:50:27.241989 2866 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-f9fd979d6-q5nlf through plugin: invalid network status for
* Oct 13 00:50:34 multinode-20201013004012-797 kubelet[2866]: I1013 00:50:34.951848 2866 kuberuntime_manager.go:981] updating runtime config through cri with podcidr 10.244.0.0/24
* Oct 13 00:50:34 multinode-20201013004012-797 kubelet[2866]: I1013 00:50:34.952362 2866 docker_service.go:357] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}
* Oct 13 00:50:34 multinode-20201013004012-797 kubelet[2866]: I1013 00:50:34.952463 2866 kubelet_network.go:77] Setting Pod CIDR: -> 10.244.0.0/24
* Oct 13 00:50:56 multinode-20201013004012-797 kubelet[2866]: I1013 00:50:56.689226 2866 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 99bcf7eb9ff01f379058a419e17ea9be20e77561c16cc5a8f84f780b3decfee8
* Oct 13 00:50:56 multinode-20201013004012-797 kubelet[2866]: I1013 00:50:56.691451 2866 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 04c110c3f71b73020647f7ac419be2372c7f8b0f6923edc310dbfa6eb9800877
* Oct 13 00:50:56 multinode-20201013004012-797 kubelet[2866]: E1013 00:50:56.691727 2866 pod_workers.go:191] Error syncing pod a0fd6cee-5f58-40cc-85d7-6530ca8479ec ("storage-provisioner_kube-system(a0fd6cee-5f58-40cc-85d7-6530ca8479ec)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a0fd6cee-5f58-40cc-85d7-6530ca8479ec)"
* Oct 13 00:51:08 multinode-20201013004012-797 kubelet[2866]: I1013 00:51:08.038060 2866 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 16719dbe717b3247b8ce142617cc7d131aa62a5c04772a19653534dfade135ae
* Oct 13 00:51:09 multinode-20201013004012-797 kubelet[2866]: I1013 00:51:09.100864 2866 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 04c110c3f71b73020647f7ac419be2372c7f8b0f6923edc310dbfa6eb9800877
*
* ==> storage-provisioner [04c110c3f71b] <==
* F1013 00:50:56.034768 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
*
* ==> storage-provisioner [b6e65664dc14] <==
* I1013 00:51:09.447500 1 leaderelection.go:242] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
* I1013 00:51:26.892546 1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
* I1013 00:51:26.894087 1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_multinode-20201013004012-797_5a957897-0a5d-45e5-a705-e95212791f61!
* I1013 00:51:26.894256 1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bd0c730b-b6e1-4b0b-8ab2-4532082cf3f8", APIVersion:"v1", ResourceVersion:"975", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-20201013004012-797_5a957897-0a5d-45e5-a705-e95212791f61 became leader
* I1013 00:51:26.994529 1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_multinode-20201013004012-797_5a957897-0a5d-45e5-a705-e95212791f61!
-- /stdout --
helpers_test.go:248: (dbg) Run: ./minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-20201013004012-797 -n multinode-20201013004012-797
helpers_test.go:255: (dbg) Run: kubectl --context multinode-20201013004012-797 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: non-running pods:
helpers_test.go:263: ======> post-mortem[TestMultiNode/serial/RestartMultiNode]: describe non-running pods <======
helpers_test.go:266: (dbg) Run: kubectl --context multinode-20201013004012-797 describe pod
helpers_test.go:266: (dbg) Non-zero exit: kubectl --context multinode-20201013004012-797 describe pod : exit status 1 (62.199835ms)
** stderr **
error: resource name may not be empty
** /stderr **
helpers_test.go:268: kubectl --context multinode-20201013004012-797 describe pod : exit status 1
--- FAIL: TestMultiNode/serial/RestartMultiNode (160.80s)

medyagh added the kind/failing-test label on Oct 13, 2020

medyagh commented Oct 13, 2020

@sharifelgamal
I verified manually that right after starting a stopped multinode cluster, the kubelet is stopped, but after waiting a minute or so it finally comes up.

medya@~/workspace/minikube (stopped_kubelet) $ minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

minikube-m02
type: Worker
host: Running
kubelet: Stopped

on the docker driver on macOS
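For anyone reproducing this, a rough way to watch for the worker kubelet coming back is to poll systemd over minikube ssh. This is only a sketch, assuming the worker node is named m02 and the standard kubelet.service unit:

# poll the worker kubelet until systemd reports it active (up to ~2.5 minutes)
for i in $(seq 1 30); do
  state=$(minikube ssh -n m02 -- sudo systemctl is-active kubelet 2>/dev/null | tr -d '\r')
  echo "attempt $i: kubelet is ${state:-unknown}"
  [ "$state" = "active" ] && break
  sleep 5
done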

On another run it did NOT come up eventually; here are the logs:

medya@~/workspace/minikube (stopped_kubelet) $ ./out/minikube ssh -n m02
docker@minikube-m02:~$ sudo systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: activating (auto-restart) (Result: exit-code) since Tue 2020-10-13 01:47:21 UTC; 197ms ago
       Docs: http://kubernetes.io/docs/
    Process: 8177 ExecStart=/var/lib/minikube/binaries/v1.19.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3 (code=exited, status=255/EXCEPTION)
   Main PID: 8177 (code=exited, status=255/EXCEPTION)

medyagh changed the title from "FailedTest: TestMultiNode/serial/RestartMultiNode Kubelet stays stopped" to "Multinode, ensure kublet is up after re-start" on Oct 13, 2020

medyagh commented Oct 13, 2020

Here are the journalctl logs for the kubelet:

Oct 13 01:48:18 minikube-m02 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
Oct 13 01:48:18 minikube-m02 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 13 01:48:18 minikube-m02 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 790.
Oct 13 01:48:18 minikube-m02 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Oct 13 01:48:18 minikube-m02 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Oct 13 01:48:19 minikube-m02 kubelet[8934]: F1013 01:48:19.069350    8934 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
Oct 13 01:48:19 minikube-m02 kubelet[8934]: goroutine 1 [running]:
Oct 13 01:48:19 minikube-m02 kubelet[8934]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc00012c001, 0xc00013e840, 0xfb, 0x14d)
Oct 13 01:48:19 minikube-m02 kubelet[8934]:         /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:996 +0xb9
Oct 13 01:48:19 minikube-m02 kubelet[8934]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x6cf6140, 0xc000000003, 0x0, 0x0, 0xc0005e11f0, 0x6b49c19, 0x9, 0xc6, 0xc0003a9a00)
Oct 13 01:48:19 minikube-m02 kubelet[8934]:         /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:945 +0x191
Oct 13 01:48:19 minikube-m02 kubelet[8934]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x6cf6140, 0x3, 0x0, 0x0, 0x1, 0xc000b3fd00, 0x1, 0x1)
Oct 13 01:48:19 minikube-m02 kubelet[8934]:         /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:718 +0x165
Oct 13 01:48:19 minikube-m02 kubelet[8934]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).print(...)
Oct 13 01:48:19 minikube-m02 kubelet[8934]:         /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:703
Oct 13 01:48:19 minikube-m02 kubelet[8934]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatal(...)
Oct 13 01:48:19 minikube-m02 kubelet[8934]:         /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1436
Oct 13 01:48:19 minikube-m02 kubelet[8934]: k8s.io/kubernetes/cmd/kubelet/app.NewKubeletCommand.func1(0xc0000fa000, 0xc000122080, 0x6, 0x6)
Oct 13 01:48:19 minikube-m02 kubelet[8934]:         /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:198 +0xa05
Oct 13 01:48:19 minikube-m02 kubelet[8934]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc0000fa000, 0xc000122080, 0x6, 0x6, 0xc0000fa000, 0xc000122080)
Oct 13 01:48:19 minikube-m02 kubelet[8934]:         /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:846 +0x2c2
Oct 13 01:48:19 minikube-m02 kubelet[8934]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc0000fa000, 0x163d6a193db96464, 0x6cf5c60, 0x409b05)
Oct 13 01:48:19 minikube-m02 kubelet[8934]:         /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:950 +0x375
Oct 13 01:48:19 minikube-m02 kubelet[8934]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
Oct 13 01:48:19 minikube-m02 kubelet[8934]:         /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:887
Oct 13 01:48:19 minikube-m02 kubelet[8934]: main.main()
Oct 13 01:48:19 minikube-m02 kubelet[8934]:         _output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:41 +0xe5
Oct 13 01:48:19 minikube-m02 kubelet[8934]: goroutine 19 [chan receive]:
Oct 13 01:48:19 minikube-m02 kubelet[8934]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x6cf6140)
Oct 13 01:48:19 minikube-m02 kubelet[8934]:         /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1131 +0x8b
Oct 13 01:48:19 minikube-m02 kubelet[8934]: created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
Oct 13 01:48:19 minikube-m02 kubelet[8934]:         /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:416 +0xd8
Oct 13 01:48:19 minikube-m02 kubelet[8934]: goroutine 92 [select]:
Oct 13 01:48:19 minikube-m02 kubelet[8934]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4794628, 0x4becee0, 0xc0005fa960, 0x1, 0xc0001100c0)
Oct 13 01:48:19 minikube-m02 kubelet[8934]:         /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
Oct 13 01:48:19 minikube-m02 kubelet[8934]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4794628, 0x12a05f200, 0x0, 0xc0004c4001, 0xc0001100c0)
Oct 13 01:48:19 minikube-m02 kubelet[8934]:         /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
Oct 13 01:48:19 minikube-m02 kubelet[8934]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
Oct 13 01:48:19 minikube-m02 kubelet[8934]:         /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
Oct 13 01:48:19 minikube-m02 kubelet[8934]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x4794628, 0x12a05f200)
Oct 13 01:48:19 minikube-m02 kubelet[8934]:         /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x4f
Oct 13 01:48:19 minikube-m02 kubelet[8934]: created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
Oct 13 01:48:19 minikube-m02 kubelet[8934]:         /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a
Oct 13 01:48:19 minikube-m02 kubelet[8934]: goroutine 23 [select]:
Oct 13 01:48:19 minikube-m02 kubelet[8934]: k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc0000a4af0)
Oct 13 01:48:19 minikube-m02 kubelet[8934]:         /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:154 +0x105
Oct 13 01:48:19 minikube-m02 kubelet[8934]: created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
Oct 13 01:48:19 minikube-m02 kubelet[8934]:         /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:32 +0x57
Oct 13 01:48:19 minikube-m02 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
Oct 13 01:48:19 minikube-m02 systemd[1]: kubelet.service: Failed with result 'exit-code'.
docker@minikube-m02:~$ 
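The fatal error is easy to miss in that stack trace: the kubelet keeps exiting because /var/lib/kubelet/config.yaml does not exist on the worker. A hedged one-liner (same m02 node name as above) to pull just that line back out of the journal:

# grep the worker's kubelet journal for the fatal config-load error
minikube ssh -n m02 -- "sudo journalctl -u kubelet --no-pager | grep 'failed to load Kubelet config file' | tail -n 1"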

medyagh changed the title from "Multinode, ensure kublet is up after re-start" to "FailedTest: TestMultiNode/serial/RestartMultiNode kubelet not running after restart" on Oct 13, 2020

medyagh commented Oct 13, 2020

There is no kubelet config on the worker node!

$ sudo ls /var/lib/kubelet/config
ls: cannot access '/var/lib/kubelet/config': No such file or directory
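A minimal comparison sketch (same paths and node name as above, using whichever minikube binary is on PATH); the control-plane copy is presumably present since its kubelet is running:

# control plane: kubelet config expected to be present
minikube ssh -- 'sudo ls -l /var/lib/kubelet/config.yaml'
# worker: this is the file the crash-looping kubelet cannot find
minikube ssh -n m02 -- 'sudo ls -l /var/lib/kubelet/config.yaml'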
