ci-kubernetes-e2e-gce-serial: broken test run #37409
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/48/
Multiple broken tests:
Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}
Issues about this test specifically: #36457
Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}
Issues about this test specifically: #33883
Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #28853 #31585
Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28091
Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #27655 #33876
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26784 #28384 #31935 #33023
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}
Issues about this test specifically: #36794
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26982 #32214 #33994 #34035 #35399
Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}
Issues about this test specifically: #27470 #30156 #34304
Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #29516
Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #31918
Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] {Kubernetes e2e suite}
Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #29514
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #30317 #31591 #37163
Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}
Issues about this test specifically: #27957
Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Issues about this test specifically: #29816 #30018 #33974
Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}
Issues about this test specifically: #35277
Failed: [k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart {Kubernetes e2e suite}
Issues about this test specifically: #31407
Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}
Issues about this test specifically: #27502 #28722 #32037
Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Failed: [k8s.io] Etcd failure [Disruptive] should recover from SIGKILL {Kubernetes e2e suite}
Issues about this test specifically: #29444
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27406 #27669 #29770 #32642
Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121
Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}
Issues about this test specifically: #27233 #36204
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}
Issues about this test specifically: #37259
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}
Issues about this test specifically: #34223
Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] HA-master survive addition/removal replicas same zone [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] Etcd failure [Disruptive] should recover from network partition with master {Kubernetes e2e suite}
Issues about this test specifically: #29512
Failed: [k8s.io] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s, scaling up to 1 pods per node {Kubernetes e2e suite}
Failed: [k8s.io] HA-master survive addition/removal replicas different zones [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #28657 #30519 #33878
Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #28019
Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #32945
Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}
Issues about this test specifically: #31277 #31347 #31710 #32260 #32531
Failed: [k8s.io] Daemon set [Serial] should run and stop simple daemon {Kubernetes e2e suite}
Issues about this test specifically: #31428
Failed: [k8s.io] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart {Kubernetes e2e suite}
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}
Issues about this test specifically: #36950
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27394 #27660 #28079 #28768 #35871
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #36914
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}
Issues about this test specifically: #30078 #30142
Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #28071
Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity {Kubernetes e2e suite}
Issues about this test specifically: #30441
Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}
Issues about this test specifically: #35279
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #37373
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/49/
Multiple broken tests:
Failed: [k8s.io] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187
Failed: [k8s.io] Etcd failure [Disruptive] should recover from network partition with master {Kubernetes e2e suite}
Issues about this test specifically: #29512
Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28091
Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] HA-master survive addition/removal replicas same zone [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #36914
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}
Issues about this test specifically: #30078 #30142
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}
Issues about this test specifically: #36950
Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #29516
Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity {Kubernetes e2e suite}
Issues about this test specifically: #30441
Failed: [k8s.io] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s, scaling up to 1 pods per node {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] {Kubernetes e2e suite}
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}
Issues about this test specifically: #37259
Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #28257 #29159 #29449 #32447
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26784 #28384 #31935 #33023
Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}
Issues about this test specifically: #27502 #28722 #32037
Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #27655 #33876
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #30317 #31591 #37163
Failed: [k8s.io] HA-master survive addition/removal replicas different zones [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #32945
Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}
Issues about this test specifically: #27233 #36204
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #28657 #30519 #33878
Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}
Issues about this test specifically: #26744 #26929
Failed: [k8s.io] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart {Kubernetes e2e suite}
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #27397 #27917 #31592
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26982 #32214 #33994 #34035 #35399
Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #28853 #31585
Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}
Issues about this test specifically: #31277 #31347 #31710 #32260 #32531
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #28071
Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #29514
Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121
Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}
Issues about this test specifically: #27957
Failed: [k8s.io] Etcd failure [Disruptive] should recover from SIGKILL {Kubernetes e2e suite}
Issues about this test specifically: #29444
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/50/
Multiple broken tests:
Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187
Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}
Issues about this test specifically: #27502 #28722 #32037
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}
Issues about this test specifically: #34223
Failed: [k8s.io] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}
Issues about this test specifically: #35277
Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27394 #27660 #28079 #28768 #35871
Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #31918
Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #32945
Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}
Issues about this test specifically: #33883
Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}
Issues about this test specifically: #27233 #36204
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}
Issues about this test specifically: #30078 #30142
Failed: [k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart {Kubernetes e2e suite}
Issues about this test specifically: #31407
Failed: [k8s.io] HA-master survive addition/removal replicas different zones [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity {Kubernetes e2e suite}
Issues about this test specifically: #30441
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}
Failed: [k8s.io] Etcd failure [Disruptive] should recover from SIGKILL {Kubernetes e2e suite}
Issues about this test specifically: #29444
Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] {Kubernetes e2e suite}
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}
Issues about this test specifically: #36950
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}
Issues about this test specifically: #37259
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #30317 #31591 #37163
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #37373
Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #29516
Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #29514
Failed: [k8s.io] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s, scaling up to 1 pods per node {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28091
Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}
Issues about this test specifically: #27957
Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #28019
Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Issues about this test specifically: #29816 #30018 #33974
Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #28257 #29159 #29449 #32447
Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #27655 #33876
Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}
Issues about this test specifically: #26744 #26929
Failed: [k8s.io] HA-master survive addition/removal replicas same zone [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27406 #27669 #29770 #32642
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/51/
Multiple broken tests:
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}
Failed: [k8s.io] Etcd failure [Disruptive] should recover from SIGKILL {Kubernetes e2e suite}
Issues about this test specifically: #29444
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #30317 #31591 #37163
Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #32945
Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}
Issues about this test specifically: #33883
Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #27655 #33876
Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}
Issues about this test specifically: #31277 #31347 #31710 #32260 #32531
Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}
Issues about this test specifically: #27470 #30156 #34304
Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187
Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #28019
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}
Issues about this test specifically: #36794
Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] {Kubernetes e2e suite}
Failed: [k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart {Kubernetes e2e suite}
Issues about this test specifically: #31407
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26982 #32214 #33994 #34035 #35399
Failed: [k8s.io] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s, scaling up to 1 pods per node {Kubernetes e2e suite}
Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}
Issues about this test specifically: #27502 #28722 #32037
Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28091
Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Issues about this test specifically: #29816 #30018 #33974
Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity {Kubernetes e2e suite}
Issues about this test specifically: #30441
Failed: [k8s.io] Etcd failure [Disruptive] should recover from network partition with master {Kubernetes e2e suite}
Issues about this test specifically: #29512
Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #28071
Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #37373
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}
Issues about this test specifically: #36950
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082
Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #28853 #31585
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26784 #28384 #31935 #33023
Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}
Issues about this test specifically: #27957
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}
Issues about this test specifically: #30078 #30142
Failed: [k8s.io] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] Daemon set [Serial] should run and stop simple daemon {Kubernetes e2e suite}
Issues about this test specifically: #31428
Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #31918
Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}
Issues about this test specifically: #35277
Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #28257 #29159 #29449 #32447
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}
Issues about this test specifically: #34223
Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}
Issues about this test specifically: #26744 #26929
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #28657 #30519 #33878
Failed: [k8s.io] HA-master survive addition/removal replicas different zones [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] HA-master survive addition/removal replicas same zone [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}
Issues about this test specifically: #35279
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27394 #27660 #28079 #28768 #35871
Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}
Issues about this test specifically: #27233 #36204
Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}
Issues about this test specifically: #36457
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}
Issues about this test specifically: #37259
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #36914
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #27397 #27917 #31592
Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart {Kubernetes e2e suite}
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/52/
Multiple broken tests:
Failed: [k8s.io] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] HA-master survive addition/removal replicas same zone [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #36914
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27406 #27669 #29770 #32642
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}
Issues about this test specifically: #30078 #30142
Failed: [k8s.io] HA-master survive addition/removal replicas different zones [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27394 #27660 #28079 #28768 #35871
Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #32945
Failed: [k8s.io] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart {Kubernetes e2e suite}
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}
Issues about this test specifically: #37259
Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #28071
Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}
Issues about this test specifically: #35277
Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #29514
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #30317 #31591 #37163
Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity {Kubernetes e2e suite}
Issues about this test specifically: #30441
Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #28257 #29159 #29449 #32447
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}
Issues about this test specifically: #34223
Failed: [k8s.io] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s, scaling up to 1 pods per node {Kubernetes e2e suite}
Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}
Issues about this test specifically: #27502 #28722 #32037
Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #27655 #33876
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}
Issues about this test specifically: #36794
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}
Issues about this test specifically: #36950
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/53/
Multiple broken tests:
Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #32945
Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #28257 #29159 #29449 #32447
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #27397 #27917 #31592
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}
Issues about this test specifically: #36950
Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}
Issues about this test specifically: #27470 #30156 #34304
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}
Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #29514
Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #31918
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27406 #27669 #29770 #32642
Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}
Issues about this test specifically: #27502 #28722 #32037
Failed: [k8s.io] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187
Failed: [k8s.io] Daemon set [Serial] should run and stop simple daemon {Kubernetes e2e suite}
Issues about this test specifically: #31428
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #37373
Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #28853 #31585
Failed: [k8s.io] Etcd failure [Disruptive] should recover from network partition with master {Kubernetes e2e suite}
Issues about this test specifically: #29512
Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Issues about this test specifically: #29816 #30018 #33974
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26784 #28384 #31935 #33023
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27394 #27660 #28079 #28768 #35871
Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}
Issues about this test specifically: #36457
Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}
Issues about this test specifically: #27233 #36204
Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity {Kubernetes e2e suite}
Issues about this test specifically: #30441
Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #27655 #33876
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}
Issues about this test specifically: #37259
Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #29516
Failed: [k8s.io] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart {Kubernetes e2e suite}
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}
Issues about this test specifically: #35277
Failed: [k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart {Kubernetes e2e suite}
Issues about this test specifically: #31407
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}
Issues about this test specifically: #34223
Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}
Issues about this test specifically: #31277 #31347 #31710 #32260 #32531
Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}
Issues about this test specifically: #27957
Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #28071
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}
Issues about this test specifically: #30078 #30142
Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] {Kubernetes e2e suite}
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26982 #32214 #33994 #34035 #35399
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #36914
Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121
Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28091
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #28657 #30519 #33878
Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}
Issues about this test specifically: #33883
Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #28019
Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}
Issues about this test specifically: #35279
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #30317 #31591 #37163
Failed: [k8s.io] Etcd failure [Disruptive] should recover from SIGKILL {Kubernetes e2e suite}
Issues about this test specifically: #29444
Failed: [k8s.io] HA-master survive addition/removal replicas different zones [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] HA-master survive addition/removal replicas same zone [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}
Issues about this test specifically: #36794
Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}
Issues about this test specifically: #26744 #26929
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/54/
Multiple broken tests:
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #30317 #31591 #37163
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #28657 #30519 #33878
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] Etcd failure [Disruptive] should recover from network partition with master {Kubernetes e2e suite}
Issues about this test specifically: #29512
Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #29514
Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #36914
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}
Issues about this test specifically: #30078 #30142
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}
Issues about this test specifically: #36794
Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}
Issues about this test specifically: #36457
Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}
Issues about this test specifically: #27957
Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #32945
Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #31918
Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}
Issues about this test specifically: #31277 #31347 #31710 #32260 #32531
Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}
Issues about this test specifically: #26744 #26929
Failed: [k8s.io] HA-master survive addition/removal replicas same zone [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}
Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}
Issues about this test specifically: #27233 #36204
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26982 #32214 #33994 #34035 #35399
Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Issues about this test specifically: #29816 #30018 #33974
Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187
Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #28853 #31585
Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #28071
Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity {Kubernetes e2e suite}
Issues about this test specifically: #30441
Failed: [k8s.io] Etcd failure [Disruptive] should recover from SIGKILL {Kubernetes e2e suite}
Issues about this test specifically: #29444
Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #28257 #29159 #29449 #32447
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}
Issues about this test specifically: #34223
Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}
Issues about this test specifically: #27470 #30156 #34304
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27394 #27660 #28079 #28768 #35871
Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #27397 #27917 #31592
Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #29516
Failed: [k8s.io] HA-master survive addition/removal replicas different zones [Serial][Disruptive] {Kubernetes e2e suite}
The serial test suites ci-kubernetes-e2e-gce-serial and ci-kubernetes-e2e-gci-gce-serial started to fail on November 23, 2016 (lots of tests are red). The changes that went in during that period are:
It looks like the culprit was:
The tests started failing as soon as the "HA Master" tests started running as part of the test suite. So PR #37356 needs to be reverted. However, we are in code freeze until Monday morning, so I will prepare the revert and check it in first thing on Monday (or on Saturday or Sunday if everything otherwise looks OK).
Looks like @jszczepkowski is ahead of me: PR #37441
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/55/
Multiple broken tests:
Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #28019
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082
Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}
Issues about this test specifically: #35279
Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}
Issues about this test specifically: #35277
Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}
Issues about this test specifically: #27470 #30156 #34304
Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28091
Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #28071
Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}
Issues about this test specifically: #26744 #26929
Failed: [k8s.io] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s, scaling up to 1 pods per node {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Issues about this test specifically: #29816 #30018 #33974
Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187
Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #27655 #33876
Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}
Issues about this test specifically: #27233 #36204
Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}
Issues about this test specifically: #36457
Failed: [k8s.io] Etcd failure [Disruptive] should recover from SIGKILL {Kubernetes e2e suite}
Issues about this test specifically: #29444
Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}
Issues about this test specifically: #31277 #31347 #31710 #32260 #32531
Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121
Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity {Kubernetes e2e suite}
Issues about this test specifically: #30441
Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #28853 #31585
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}
Issues about this test specifically: #36794
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26784 #28384 #31935 #33023
Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #29516
Failed: [k8s.io] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}
Issues about this test specifically: #37259
Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}
Issues about this test specifically: #27957
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #37373 Failed: [k8s.io] HA-master survive addition/removal replicas different zones [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}
Failed: [k8s.io] HA-master survive addition/removal replicas same zone [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}
Issues about this test specifically: #33883 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}
Issues about this test specifically: #30078 #30142 Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] {Kubernetes e2e suite}
Failed: [k8s.io] Etcd failure [Disruptive] should recover from network partition with master {Kubernetes e2e suite}
Issues about this test specifically: #29512 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27394 #27660 #28079 #28768 #35871 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #29514 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #30317 #31591 #37163 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}
Issues about this test specifically: #34223 Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}
|
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/56/ Multiple broken tests: Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28091 Failed: [k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart {Kubernetes e2e suite}
Issues about this test specifically: #31407 Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Failed: [k8s.io] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s, scaling up to 1 pods per node {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}
Issues about this test specifically: #27470 #30156 #34304 Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] {Kubernetes e2e suite}
Failed: [k8s.io] Etcd failure [Disruptive] should recover from SIGKILL {Kubernetes e2e suite}
Issues about this test specifically: #29444 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #28071 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}
Issues about this test specifically: #37259 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}
Issues about this test specifically: #36950 Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}
Issues about this test specifically: #35279 Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #31918 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #36914 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27394 #27660 #28079 #28768 #35871 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #28019 Failed: [k8s.io] HA-master survive addition/removal replicas same zone [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #37373 Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #28853 #31585 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082 Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #27655 #33876 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}
Issues about this test specifically: #36794 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #30317 #31591 #37163 Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}
Issues about this test specifically: #31277 #31347 #31710 #32260 #32531 Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #28257 #29159 #29449 #32447 Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #32945 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #28657 #30519 #33878 Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity {Kubernetes e2e suite}
Issues about this test specifically: #30441 Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26784 #28384 #31935 #33023 Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}
Issues about this test specifically: #36457 Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Issues about this test specifically: #29816 #30018 #33974 Failed: [k8s.io] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27406 #27669 #29770 #32642 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}
Issues about this test specifically: #34223 Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #29516 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}
Issues about this test specifically: #30078 #30142 Failed: [k8s.io] HA-master survive addition/removal replicas different zones [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #27397 #27917 #31592 Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}
Issues about this test specifically: #27502 #28722 #32037 Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}
Issues about this test specifically: #33883 Failed: [k8s.io] Daemon set [Serial] should run and stop simple daemon {Kubernetes e2e suite}
Issues about this test specifically: #31428 |
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/57/ Run so broken it didn't make JUnit output! |
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/58/ Multiple broken tests: Failed: [k8s.io] HA-master survive addition/removal replicas different zones [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27406 #27669 #29770 #32642 Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 Failed: [k8s.io] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}
Issues about this test specifically: #37479 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}
Issues about this test specifically: #30078 #30142 Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #27655 #33876 Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity {Kubernetes e2e suite}
Issues about this test specifically: #30441 Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}
Issues about this test specifically: #26744 #26929 Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #28019 Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #29514 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #37373 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #30317 #31591 #37163 Failed: [k8s.io] HA-master survive addition/removal replicas same zone [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26784 #28384 #31935 #33023 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}
Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #28257 #29159 #29449 #32447 Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}
Issues about this test specifically: #27470 #30156 #34304 Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #28071 Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #31918 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}
Issues about this test specifically: #37259 Failed: [k8s.io] Etcd failure [Disruptive] should recover from SIGKILL {Kubernetes e2e suite}
Issues about this test specifically: #29444 Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}
Issues about this test specifically: #33883 Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}
Issues about this test specifically: #27233 #36204 Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}
Issues about this test specifically: #27957 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #28657 #30519 #33878 Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}
Issues about this test specifically: #31277 #31347 #31710 #32260 #32531 Failed: [k8s.io] Etcd failure [Disruptive] should recover from network partition with master {Kubernetes e2e suite}
Issues about this test specifically: #29512 Failed: [k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart {Kubernetes e2e suite}
Issues about this test specifically: #31407 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #36914 Failed: [k8s.io] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart {Kubernetes e2e suite}
Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] Daemon set [Serial] should run and stop simple daemon {Kubernetes e2e suite}
Issues about this test specifically: #31428 Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #32945 Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}
Issues about this test specifically: #36457 Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}
Issues about this test specifically: #35277 Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}
Issues about this test specifically: #27502 #28722 #32037 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27394 #27660 #28079 #28768 #35871 |
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/59/ Multiple broken tests: Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}
Issues about this test specifically: #27502 #28722 #32037 Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Issues about this test specifically: #29816 #30018 #33974 Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}
Issues about this test specifically: #35279 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #36914 Failed: [k8s.io] Daemon set [Serial] should run and stop simple daemon {Kubernetes e2e suite}
Issues about this test specifically: #31428 Failed: [k8s.io] Etcd failure [Disruptive] should recover from network partition with master {Kubernetes e2e suite}
Issues about this test specifically: #29512 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}
Issues about this test specifically: #37479 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27394 #27660 #28079 #28768 #35871 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #31918 Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #29514 Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}
Issues about this test specifically: #36794 Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}
Issues about this test specifically: #36457 Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #32945 Failed: [k8s.io] HA-master survive addition/removal replicas same zone [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart {Kubernetes e2e suite}
Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] {Kubernetes e2e suite}
Failed: [k8s.io] Etcd failure [Disruptive] should recover from SIGKILL {Kubernetes e2e suite}
Issues about this test specifically: #29444 Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}
Issues about this test specifically: #35277 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #27397 #27917 #31592 Failed: [k8s.io] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27406 #27669 #29770 #32642 Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}
Issues about this test specifically: #37259 Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26784 #28384 #31935 #33023 Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}
Issues about this test specifically: #27957 Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}
Issues about this test specifically: #27233 #36204 Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28091 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}
Issues about this test specifically: #34223 Failed: [k8s.io] HA-master survive addition/removal replicas different zones [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}
Issues about this test specifically: #33883 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #30317 #31591 #37163 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #28657 #30519 #33878 Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #28071 Failed: [k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart {Kubernetes e2e suite}
Issues about this test specifically: #31407 Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}
Issues about this test specifically: #31277 #31347 #31710 #32260 #32531 Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #29516 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}
Issues about this test specifically: #36950 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}
|
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/60/ Multiple broken tests: Failed: [k8s.io] HA-master survive addition/removal replicas same zone [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27406 #27669 #29770 #32642 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #37373 Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28091 Failed: [k8s.io] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart {Kubernetes e2e suite}
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}
Issues about this test specifically: #36794 Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #28853 #31585 Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity {Kubernetes e2e suite}
Issues about this test specifically: #30441 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #36914 Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26784 #28384 #31935 #33023 Failed: [k8s.io] Etcd failure [Disruptive] should recover from network partition with master {Kubernetes e2e suite}
Issues about this test specifically: #29512 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27394 #27660 #28079 #28768 #35871 Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #27655 #33876 Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #32945 Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Failed: [k8s.io] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s, scaling up to 1 pods per node {Kubernetes e2e suite}
Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #28257 #29159 #29449 #32447 Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #28019 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}
Issues about this test specifically: #30078 #30142 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #28657 #30519 #33878 Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #29516 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}
Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] {Kubernetes e2e suite}
Failed: [k8s.io] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}
Issues about this test specifically: #26744 #26929 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #30317 #31591 #37163 Failed: [k8s.io] HA-master survive addition/removal replicas different zones [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}
Issues about this test specifically: #27502 #28722 #32037 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}
Issues about this test specifically: #27233 #36204 Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}
Issues about this test specifically: #37479 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082 Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}
Issues about this test specifically: #33883 Failed: [k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart {Kubernetes e2e suite}
Issues about this test specifically: #31407 Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #29514 Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #31918 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #27397 #27917 #31592 Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}
Issues about this test specifically: #35277 Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}
Issues about this test specifically: #36457 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}
Issues about this test specifically: #36950 Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}
Issues about this test specifically: #35279 |
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/61/ Multiple broken tests: Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}
Issues about this test specifically: #31277 #31347 #31710 #32260 #32531 Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26784 #28384 #31935 #33023 Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}
Issues about this test specifically: #27470 #30156 #34304 Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}
Issues about this test specifically: #33883 Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}
Issues about this test specifically: #27502 #28722 #32037 Failed: [k8s.io] Etcd failure [Disruptive] should recover from network partition with master {Kubernetes e2e suite}
Issues about this test specifically: #29512 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #36914 Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}
Issues about this test specifically: #27233 #36204 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #30317 #31591 #37163 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #27397 #27917 #31592 Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}
Issues about this test specifically: #35279 Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #28019 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}
Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}
Issues about this test specifically: #27957 Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #31918 Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28091 Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #28257 #29159 #29449 #32447 Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}
Issues about this test specifically: #37259 Failed: [k8s.io] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart {Kubernetes e2e suite}
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27394 #27660 #28079 #28768 #35871 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity {Kubernetes e2e suite}
Issues about this test specifically: #30441 Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #29514 Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 Failed: [k8s.io] HA-master survive addition/removal replicas same zone [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}
Issues about this test specifically: #35277 Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}
Issues about this test specifically: #36457 Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] {Kubernetes e2e suite}
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #28657 #30519 #33878 Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #28853 #31585 Failed: [k8s.io] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s, scaling up to 1 pods per node {Kubernetes e2e suite}
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}
Issues about this test specifically: #36794 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}
Issues about this test specifically: #36950 Failed: [k8s.io] Etcd failure [Disruptive] should recover from SIGKILL {Kubernetes e2e suite}
Issues about this test specifically: #29444 Failed: [k8s.io] HA-master survive addition/removal replicas different zones [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart {Kubernetes e2e suite}
Issues about this test specifically: #31407 Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #29516 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27406 #27669 #29770 #32642 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}
Issues about this test specifically: #30078 #30142 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}
Issues about this test specifically: #34223 Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #27655 #33876 Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #32945 Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}
Issues about this test specifically: #37479 |
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/62/ Multiple broken tests: Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27406 #27669 #29770 #32642 Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #28071 Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26784 #28384 #31935 #33023 Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #28019 Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}
Issues about this test specifically: #27233 #36204 Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #28257 #29159 #29449 #32447 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27394 #27660 #28079 #28768 #35871 Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}
Issues about this test specifically: #35279 Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #29514 Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}
Issues about this test specifically: #36950 Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] {Kubernetes e2e suite}
Failed: [k8s.io] HA-master survive addition/removal replicas different zones [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}
Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}
Issues about this test specifically: #31277 #31347 #31710 #32260 #32531 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}
Issues about this test specifically: #36794 Failed: [k8s.io] Etcd failure [Disruptive] should recover from SIGKILL {Kubernetes e2e suite}
Issues about this test specifically: #29444 Failed: [k8s.io] HA-master survive addition/removal replicas same zone [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}
Issues about this test specifically: #36457 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #27397 #27917 #31592 Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}
Issues about this test specifically: #35277 Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}
Issues about this test specifically: #26744 #26929 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082 Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 Failed: [k8s.io] Daemon set [Serial] should run and stop simple daemon {Kubernetes e2e suite}
Issues about this test specifically: #31428 Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #31918 Failed: [k8s.io] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s, scaling up to 1 pods per node {Kubernetes e2e suite}
Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}
Issues about this test specifically: #27502 #28722 #32037 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}
|
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/63/ Multiple broken tests: Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 Failed: [k8s.io] HA-master survive addition/removal replicas different zones [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #32945 Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}
Issues about this test specifically: #26744 #26929 Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Issues about this test specifically: #29816 #30018 #33974 Failed: [k8s.io] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart {Kubernetes e2e suite}
Failed: [k8s.io] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s, scaling up to 1 pods per node {Kubernetes e2e suite}
Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}
Issues about this test specifically: #27957 Failed: [k8s.io] HA-master survive addition/removal replicas same zone [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}
Issues about this test specifically: #27502 #28722 #32037 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27406 #27669 #29770 #32642 Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/64/ Multiple broken tests: Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}
Issues about this test specifically: #36794 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27406 #27669 #29770 #32642 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}
Issues about this test specifically: #34223 Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28091 Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #28257 #29159 #29449 #32447 Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}
Issues about this test specifically: #31277 #31347 #31710 #32260 #32531 Failed: [k8s.io] Etcd failure [Disruptive] should recover from network partition with master {Kubernetes e2e suite}
Issues about this test specifically: #29512 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}
Issues about this test specifically: #37479 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082 Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}
Issues about this test specifically: #27957 Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] {Kubernetes e2e suite}
Failed: [k8s.io] Etcd failure [Disruptive] should recover from SIGKILL {Kubernetes e2e suite}
Issues about this test specifically: #29444 Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #27655 #33876 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #37373 Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}
Issues about this test specifically: #27470 #30156 #34304 Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}
Issues about this test specifically: #36457 Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Issues about this test specifically: #29816 #30018 #33974 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #30317 #31591 #37163 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #27397 #27917 #31592 Failed: [k8s.io] HA-master survive addition/removal replicas same zone [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity {Kubernetes e2e suite}
Issues about this test specifically: #30441 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}
Issues about this test specifically: #37259 Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #32945 Failed: [k8s.io] HA-master survive addition/removal replicas different zones [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26784 #28384 #31935 #33023 Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}
Issues about this test specifically: #27502 #28722 #32037 Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}
Issues about this test specifically: #35279 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #28657 #30519 #33878 Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}
Issues about this test specifically: #27233 #36204 Failed: [k8s.io] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}
Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}
Issues about this test specifically: #35277 Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #31918 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}
Issues about this test specifically: #36950 Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #28019 Failed: [k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart {Kubernetes e2e suite}
Issues about this test specifically: #31407
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/65/ Multiple broken tests: Failed: [k8s.io] Etcd failure [Disruptive] should recover from network partition with master {Kubernetes e2e suite}
Issues about this test specifically: #29512 Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}
Issues about this test specifically: #33883 Failed: [k8s.io] Etcd failure [Disruptive] should recover from SIGKILL {Kubernetes e2e suite}
Issues about this test specifically: #29444 Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity {Kubernetes e2e suite}
Issues about this test specifically: #30441 Failed: [k8s.io] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}
Issues about this test specifically: #31277 #31347 #31710 #32260 #32531 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}
Issues about this test specifically: #36950 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #27397 #27917 #31592 Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}
Issues about this test specifically: #27957 Failed: [k8s.io] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s, scaling up to 1 pods per node {Kubernetes e2e suite}
Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] HA-master survive addition/removal replicas same zone [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #28019 Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}
Issues about this test specifically: #36457 Failed: [k8s.io] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #27655 #33876 Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28091 Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26784 #28384 #31935 #33023 Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] {Kubernetes e2e suite}
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}
Issues about this test specifically: #37259 Failed: [k8s.io] HA-master survive addition/removal replicas different zones [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #32945 Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}
Issues about this test specifically: #27470 #30156 #34304 Failed: [k8s.io] Daemon set [Serial] should run and stop simple daemon {Kubernetes e2e suite}
Issues about this test specifically: #31428 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}
Issues about this test specifically: #36794 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}
Issues about this test specifically: #34223 Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #36914 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}
Issues about this test specifically: #37479 Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #31918 Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}
Issues about this test specifically: #35277 Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #28257 #29159 #29449 #32447 #37508 Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #28071 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #29514 Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}
Issues about this test specifically: #27502 #28722 #32037 Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}
Issues about this test specifically: #35279
Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/66/ Run so broken it didn't make JUnit output!
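For runs like this one that produced no JUnit XML, the raw Jenkins artifacts are usually still available in the GCS bucket behind gubernator. A minimal sketch for triage, assuming the bucket path simply mirrors the gubernator URL (gs://kubernetes-jenkins/logs/&lt;job&gt;/&lt;build&gt;/):

```bash
# List everything the run uploaded, then pull the main build log for inspection.
gsutil ls gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/66/
gsutil cp gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/66/build-log.txt .
```

If started.json / finished.json were written for the run, the same prefix also records the commit and overall result metadata.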
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/67/ Multiple broken tests: Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}
Issues about this test specifically: #36794 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082 Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}
Issues about this test specifically: #26744 #26929 Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}
Issues about this test specifically: #35279 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #37373 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}
Issues about this test specifically: #36950 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27406 #27669 #29770 #32642 Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Issues about this test specifically: #29816 #30018 #33974 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #36914 Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity {Kubernetes e2e suite}
Issues about this test specifically: #30441 Failed: [k8s.io] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}
Issues about this test specifically: #30078 #30142 Failed: [k8s.io] Etcd failure [Disruptive] should recover from network partition with master {Kubernetes e2e suite}
Issues about this test specifically: #29512 Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}
Issues about this test specifically: #27470 #30156 #34304 Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}
Issues about this test specifically: #31277 #31347 #31710 #32260 #32531 Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}
Issues about this test specifically: #27502 #28722 #32037 Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26784 #28384 #31935 #33023 Failed: [k8s.io] HA-master survive addition/removal replicas same zone [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #29516 Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}
Issues about this test specifically: #35277 Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #28019 Failed: [k8s.io] HA-master survive addition/removal replicas different zones [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] Etcd failure [Disruptive] should recover from SIGKILL {Kubernetes e2e suite}
Issues about this test specifically: #29444 Failed: [k8s.io] Daemon set [Serial] should run and stop simple daemon {Kubernetes e2e suite}
Issues about this test specifically: #31428 Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #28071 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}
Issues about this test specifically: #37479 Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #28257 #29159 #29449 #32447 #37508 Failed: [k8s.io] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart {Kubernetes e2e suite}
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}
Issues about this test specifically: #37259 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}
Issues about this test specifically: #34223 Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}
Issues about this test specifically: #33883 Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #27655 #33876 Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #28657 #30519 #33878 Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28091 Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #28853 #31585 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}
Issues about this test specifically: #27957 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27394 #27660 #28079 #28768 #35871
[FLAKE-PING] @ixdy @jszczepkowski This flaky-test issue would love to have more attention.
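Most of the failures in these runs are in [Serial]/[Slow] suites, so they can be re-run one at a time against a test cluster while triaging. A minimal sketch, assuming an existing GCE e2e cluster and the usual hack/e2e.go runner (exact flags vary by release); the focus regex below just picks the kube-dns-autoscaler test as an example:

```bash
# Re-run a single flaking serial test by name; --ginkgo.focus is a regex
# matched against the full e2e test description.
export KUBERNETES_PROVIDER=gce
go run hack/e2e.go -v --test \
  --test_args="--ginkgo.focus=kube-dns-autoscaler"
```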
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/68/ Multiple broken tests: Failed: [k8s.io] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s, scaling up to 1 pods per node {Kubernetes e2e suite}
Failed: [k8s.io] HA-master survive addition/removal replicas different zones [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity {Kubernetes e2e suite}
Issues about this test specifically: #30441 Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}
Issues about this test specifically: #27233 #36204 Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #29516 Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #28257 #29159 #29449 #32447 #37508 Failed: [k8s.io] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}
Failed: [k8s.io] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}
Issues about this test specifically: #30078 #30142 Failed: [k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart {Kubernetes e2e suite}
Issues about this test specifically: #31407 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #37373 Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #28019 Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}
Issues about this test specifically: #26744 #26929 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}
Issues about this test specifically: #37259 Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}
Issues about this test specifically: #27470 #30156 #34304 Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Issues about this test specifically: #29816 #30018 #33974 Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}
Issues about this test specifically: #33883 Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #31918 Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #28071 Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Failed: [k8s.io] Etcd failure [Disruptive] should recover from network partition with master {Kubernetes e2e suite}
Issues about this test specifically: #29512 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}
Issues about this test specifically: #34223 Failed: [k8s.io] Etcd failure [Disruptive] should recover from SIGKILL {Kubernetes e2e suite}
Issues about this test specifically: #29444 Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26784 #28384 #31935 #33023 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}
Issues about this test specifically: #36950 Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27394 #27660 #28079 #28768 #35871 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #28657 #30519 #33878 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #30317 #31591 #37163 Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28091 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #27397 #27917 #31592 Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}
Issues about this test specifically: #31277 #31347 #31710 #32260 #32531 Failed: [k8s.io] HA-master survive addition/removal replicas same zone [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #27655 #33876 Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}
Issues about this test specifically: #27957 Failed: [k8s.io] Daemon set [Serial] should run and stop simple daemon {Kubernetes e2e suite}
Issues about this test specifically: #31428 Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #32945 Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}
Issues about this test specifically: #35279 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27406 #27669 #29770 #32642 Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}
Issues about this test specifically: #27502 #28722 #32037 Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}
Issues about this test specifically: #36457
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/69/ Multiple broken tests: Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Issues about this test specifically: #29816 #30018 #33974 Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}
Issues about this test specifically: #35279 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}
Issues about this test specifically: #36794 Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #32945 Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27406 #27669 #29770 #32642 Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26784 #28384 #31935 #33023 Failed: [k8s.io] HA-master survive addition/removal replicas different zones [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}
Issues about this test specifically: #36457 Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}
Issues about this test specifically: #33883 Failed: [k8s.io] HA-master survive addition/removal replicas same zone [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #31918 Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #29514 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #37373 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27394 #27660 #28079 #28768 #35871
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/70/ Multiple broken tests: Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27406 #27669 #29770 #32642 Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28091 Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #28071 Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #31918 Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}
Issues about this test specifically: #27233 #36204 Failed: [k8s.io] HA-master survive addition/removal replicas same zone [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] Daemon set [Serial] should run and stop simple daemon {Kubernetes e2e suite}
Issues about this test specifically: #31428 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082 Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}
Issues about this test specifically: #35277 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27394 #27660 #28079 #28768 #35871 Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #27655 #33876 Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}
Issues about this test specifically: #36457 Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26784 #28384 #31935 #33023 Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #28019 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #28657 #30519 #33878 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #30317 #31591 #37163 Failed: [k8s.io] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s, scaling up to 1 pods per node {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #36914 Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #29514 Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #27397 #27917 #31592 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}
Issues about this test specifically: #36794 Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}
Issues about this test specifically: #27470 #30156 #34304 Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509 Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}
Issues about this test specifically: #27502 #28722 #32037 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}
Issues about this test specifically: #36950 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}
Issues about this test specifically: #26744 #26929 Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}
Issues about this test specifically: #35279 Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}
Issues about this test specifically: #33883 Failed: [k8s.io] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 Failed: [k8s.io] Etcd failure [Disruptive] should recover from SIGKILL {Kubernetes e2e suite}
Issues about this test specifically: #29444 Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}
Issues about this test specifically: #31277 #31347 #31710 #32260 #32531 Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}
Issues about this test specifically: #30078 #30142 Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #37373 Failed: [k8s.io] Etcd failure [Disruptive] should recover from network partition with master {Kubernetes e2e suite}
Issues about this test specifically: #29512 Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Issues about this test specifically: #29816 #30018 #33974 Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #28257 #29159 #29449 #32447 #37508 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}
Failed: [k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart {Kubernetes e2e suite}
Issues about this test specifically: #31407 Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #28853 #31585 Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #29516 Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity {Kubernetes e2e suite}
Issues about this test specifically: #30441 Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #32945 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}
Issues about this test specifically: #34223 Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] {Kubernetes e2e suite}
Failed: [k8s.io] HA-master survive addition/removal replicas different zones [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}
Issues about this test specifically: #27957 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}
Issues about this test specifically: #37259
Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/71/ Run so broken it didn't make JUnit output!
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/72/ Multiple broken tests: Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #27397 #27917 #31592 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082 Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #28071 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #30317 #31591 #37163 Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart {Kubernetes e2e suite}
Issues about this test specifically: #31407 Failed: [k8s.io] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s, scaling up to 1 pods per node {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}
Issues about this test specifically: #30078 #30142 Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #28257 #29159 #29449 #32447 #37508 Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}
Issues about this test specifically: #36457 Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}
Issues about this test specifically: #31277 #31347 #31710 #32260 #32531 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}
Issues about this test specifically: #36794 Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #27655 #33876 Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}
Issues about this test specifically: #27957 Failed: [k8s.io] Daemon set [Serial] should run and stop simple daemon {Kubernetes e2e suite}
Issues about this test specifically: #31428 Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28091 Failed: [k8s.io] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] Etcd failure [Disruptive] should recover from network partition with master {Kubernetes e2e suite}
Issues about this test specifically: #29512 Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Issues about this test specifically: #29816 #30018 #33974 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27394 #27660 #28079 #28768 #35871 Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}
Issues about this test specifically: #35277 Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #32945 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}
Failed: [k8s.io] HA-master survive addition/removal replicas different zones [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #36914 Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity {Kubernetes e2e suite}
Issues about this test specifically: #30441 Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #29514 Failed: [k8s.io] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}
Issues about this test specifically: #33883 Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}
Issues about this test specifically: #27470 #30156 #34304 Failed: [k8s.io] Etcd failure [Disruptive] should recover from SIGKILL {Kubernetes e2e suite}
Issues about this test specifically: #29444 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #37373 Failed: [k8s.io] HA-master survive addition/removal replicas same zone [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}
Issues about this test specifically: #27233 #36204 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}
Issues about this test specifically: #36950 Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}
Issues about this test specifically: #27502 #28722 #32037 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27406 #27669 #29770 #32642
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/73/ Multiple broken tests: Failed: [k8s.io] Daemon set [Serial] should run and stop simple daemon {Kubernetes e2e suite}
Issues about this test specifically: #31428 Failed: [k8s.io] Etcd failure [Disruptive] should recover from SIGKILL {Kubernetes e2e suite}
Issues about this test specifically: #29444 Failed: [k8s.io] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s, scaling up to 1 pods per node {Kubernetes e2e suite}
Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #32945 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}
Issues about this test specifically: #37259 Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] {Kubernetes e2e suite}
Failed: [k8s.io] HA-master survive addition/removal replicas different zones [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] HA-master survive addition/removal replicas same zone [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Issues about this test specifically: #29816 #30018 #33974 Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #27397 #27917 #31592 Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27394 #27660 #28079 #28768 #35871 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}
Issues about this test specifically: #30078 #30142 Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}
Issues about this test specifically: #26744 #26929 Failed: [k8s.io] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart {Kubernetes e2e suite}
Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #28257 #29159 #29449 #32447 #37508 Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}
Issues about this test specifically: #33883 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}
Failed: [k8s.io] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}
Issues about this test specifically: #34223 Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #29514 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27406 #27669 #29770 #32642 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/74/ Multiple broken tests: Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26784 #28384 #31935 #33023 Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #28853 #31585 Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}
Issues about this test specifically: #27470 #30156 #34304 Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #28257 #29159 #29449 #32447 #37508 Failed: [k8s.io] HA-master survive addition/removal replicas same zone [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}
Issues about this test specifically: #27502 #28722 #32037 Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity {Kubernetes e2e suite}
Issues about this test specifically: #30441 Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}
Issues about this test specifically: #31277 #31347 #31710 #32260 #32531 Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #28019 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}
Issues about this test specifically: #26744 #26929 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #37373 Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #32945 Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}
Issues about this test specifically: #27957 Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28091 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}
Issues about this test specifically: #37479 Failed: [k8s.io] HA-master survive addition/removal replicas different zones [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] {Kubernetes e2e suite}
Failed: [k8s.io] Daemon set [Serial] should run and stop simple daemon {Kubernetes e2e suite}
Issues about this test specifically: #31428 Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Issues about this test specifically: #29816 #30018 #33974 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #28657 #30519 #33878 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27394 #27660 #28079 #28768 #35871 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}
Issues about this test specifically: #34223 Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #29514 Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #27655 #33876 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}
Issues about this test specifically: #36794 Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}
Issues about this test specifically: #36457 Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509 Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}
Issues about this test specifically: #33883 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}
Issues about this test specifically: #37259 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}
Issues about this test specifically: #30078 #30142 Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}
Issues about this test specifically: #35279 Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Failed: [k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart {Kubernetes e2e suite}
Issues about this test specifically: #31407 Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #29516 Failed: [k8s.io] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s, scaling up to 1 pods per node {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #28071 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #30317 #31591 #37163 Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}
Issues about this test specifically: #27233 #36204 Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}
Issues about this test specifically: #35277 Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #31918 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #27397 #27917 #31592 Failed: [k8s.io] Etcd failure [Disruptive] should recover from SIGKILL {Kubernetes e2e suite}
Issues about this test specifically: #29444 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27406 #27669 #29770 #32642 Failed: [k8s.io] Etcd failure [Disruptive] should recover from network partition with master {Kubernetes e2e suite}
Issues about this test specifically: #29512 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #36914 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}
Issues about this test specifically: #36950 Failed: [k8s.io] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart {Kubernetes e2e suite}
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/75/ Multiple broken tests: Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}
Issues about this test specifically: #27233 #36204 Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26784 #28384 #31935 #33023 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #28657 #30519 #33878 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27406 #27669 #29770 #32642 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}
Issues about this test specifically: #35279 Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}
Issues about this test specifically: #37479 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27394 #27660 #28079 #28768 #35871 Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28091 Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #28071 Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #32945 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #27397 #27917 #31592 Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #27655 #33876 Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Failed: [k8s.io] HA-master survive addition/removal replicas same zone [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Issues about this test specifically: #29816 #30018 #33974 Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}
Issues about this test specifically: #27470 #30156 #34304 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #36914 Failed: [k8s.io] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart {Kubernetes e2e suite}
Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #29514 Failed: [k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart {Kubernetes e2e suite}
Issues about this test specifically: #31407 Failed: [k8s.io] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] HA-master survive addition/removal replicas different zones [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #30317 #31591 #37163 Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}
Issues about this test specifically: #33883 Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity {Kubernetes e2e suite}
Issues about this test specifically: #30441 Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}
Issues about this test specifically: #26744 #26929 Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}
Issues about this test specifically: #35277 Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}
Issues about this test specifically: #27957 Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}
Issues about this test specifically: #31277 #31347 #31710 #32260 #32531 Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #29516 Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #28257 #29159 #29449 #32447 #37508
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/76/ Multiple broken tests: Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}
Issues about this test specifically: #34223 Failed: [k8s.io] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s, scaling up to 1 pods per node {Kubernetes e2e suite}
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #37373 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #27397 #27917 #31592 Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #28019 Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}
Issues about this test specifically: #35277 Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #28257 #29159 #29449 #32447 #37508 Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #28853 #31585 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #28657 #30519 #33878 Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}
Issues about this test specifically: #27957 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}
Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #27655 #33876 Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}
Issues about this test specifically: #36457 Failed: [k8s.io] Etcd failure [Disruptive] should recover from network partition with master {Kubernetes e2e suite}
Issues about this test specifically: #29512 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}
Issues about this test specifically: #36950 Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}
Issues about this test specifically: #27233 #36204 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27394 #27660 #28079 #28768 #35871 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}
Issues about this test specifically: #37259 Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #32945 Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}
Issues about this test specifically: #27502 #28722 #32037 Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #29516 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #36914 Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509 Failed: [k8s.io] HA-master survive addition/removal replicas different zones [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}
Issues about this test specifically: #30078 #30142 Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] {Kubernetes e2e suite}
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}
Issues about this test specifically: #37479 Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Issues about this test specifically: #29816 #30018 #33974 Failed: [k8s.io] HA-master survive addition/removal replicas same zone [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] Daemon set [Serial] should run and stop simple daemon {Kubernetes e2e suite}
Issues about this test specifically: #31428 Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #30317 #31591 #37163 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}
Issues about this test specifically: #36794 Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #31918 Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}
Issues about this test specifically: #26744 #26929 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart {Kubernetes e2e suite}
Issues about this test specifically: #31407 Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity {Kubernetes e2e suite}
Issues about this test specifically: #30441 Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #28071 Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26784 #28384 #31935 #33023 Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28091
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/77/ Multiple broken tests: Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}
Issues about this test specifically: #26744 #26929 Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #28257 #29159 #29449 #32447 #37508 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #36914 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] HA-master survive addition/removal replicas different zones [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #37373 Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}
Issues about this test specifically: #35277 Failed: [k8s.io] Etcd failure [Disruptive] should recover from network partition with master {Kubernetes e2e suite}
Issues about this test specifically: #29512 Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] {Kubernetes e2e suite}
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}
Issues about this test specifically: #37479 Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #29514 Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #31918 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #28657 #30519 #33878 Failed: [k8s.io] HA-master survive addition/removal replicas same zone [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #32945 Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #28071 Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity {Kubernetes e2e suite}
Issues about this test specifically: #30441 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}
Issues about this test specifically: #36794 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #30317 #31591 #37163 Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}
Issues about this test specifically: #27957 Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}
Issues about this test specifically: #31277 #31347 #31710 #32260 #32531
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/78/ Multiple broken tests: Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26784 #28384 #31935 #33023 Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity {Kubernetes e2e suite}
Issues about this test specifically: #30441 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #30317 #31591 #37163 Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 Failed: [k8s.io] Etcd failure [Disruptive] should recover from SIGKILL {Kubernetes e2e suite}
Issues about this test specifically: #29444 Failed: [k8s.io] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s, scaling up to 1 pods per node {Kubernetes e2e suite}
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] {Kubernetes e2e suite}
Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}
Issues about this test specifically: #31277 #31347 #31710 #32260 #32531 Failed: [k8s.io] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}
Issues about this test specifically: #27502 #28722 #32037 Failed: [k8s.io] Etcd failure [Disruptive] should recover from network partition with master {Kubernetes e2e suite}
Issues about this test specifically: #29512 Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}
Issues about this test specifically: #27957 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #28657 #30519 #33878 Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}
Issues about this test specifically: #26744 #26929 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}
Issues about this test specifically: #36950 Failed: [k8s.io] Daemon set [Serial] should run and stop simple daemon {Kubernetes e2e suite}
Issues about this test specifically: #31428 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082 Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}
Issues about this test specifically: #27470 #30156 #34304 Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}
Issues about this test specifically: #33883 Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #27655 #33876 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #36914 Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Issues about this test specifically: #29816 #30018 #33974 Failed: [k8s.io] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509 Failed: [k8s.io] HA-master survive addition/removal replicas different zones [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28091 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}
Issues about this test specifically: #34223 Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}
Issues about this test specifically: #35279 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}
Issues about this test specifically: #36794 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}
Issues about this test specifically: #30078 #30142 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27394 #27660 #28079 #28768 #35871 Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Failed: [k8s.io] HA-master survive addition/removal replicas same zone [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}
Issues about this test specifically: #37259 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #27397 #27917 #31592 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}
Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #28257 #29159 #29449 #32447 #37508 Failed: [k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart {Kubernetes e2e suite}
Issues about this test specifically: #31407 Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #28019 Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #29516 Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #28071 Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #29514
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/79/ Multiple broken tests: Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #28657 #30519 #33878 Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}
Issues about this test specifically: #27470 #30156 #34304 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27406 #27669 #29770 #32642 Failed: [k8s.io] HA-master survive addition/removal replicas same zone [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082 Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}
Issues about this test specifically: #27957 Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}
Issues about this test specifically: #36457 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #37373 Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #29514 Failed: [k8s.io] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s, scaling up to 1 pods per node {Kubernetes e2e suite}
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}
Issues about this test specifically: #37259 Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}
Issues about this test specifically: #27502 #28722 #32037 Failed: [k8s.io] Etcd failure [Disruptive] should recover from SIGKILL {Kubernetes e2e suite}
Issues about this test specifically: #29444 Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #28019 Failed: [k8s.io] Daemon set [Serial] should run and stop simple daemon {Kubernetes e2e suite}
Issues about this test specifically: #31428 Failed: [k8s.io] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #36914 Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] {Kubernetes e2e suite}
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26784 #28384 #31935 #33023 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}
Issues about this test specifically: #30078 #30142 Failed: [k8s.io] HA-master survive addition/removal replicas different zones [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}
Issues about this test specifically: #34223 Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}
Issues about this test specifically: #35277 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #30317 #31591 #37163 Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}
Issues about this test specifically: #31277 #31347 #31710 #32260 #32531 Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}
Issues about this test specifically: #35279 Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #28257 #29159 #29449 #32447 #37508 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27394 #27660 #28079 #28768 #35871 Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #31918 Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #32945 Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}
Issues about this test specifically: #27233 #36204 Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}
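Most of these specs recur from run to run, so when triaging it can help to tally how often each one appears across the pasted reports rather than reading each run in isolation. Below is a minimal sketch (not part of the k8s test-infra tooling) that assumes the comments above have been saved verbatim to a local text file; the file name is a placeholder.

```python
#!/usr/bin/env python3
"""Tally how often each e2e spec appears in the failure listings above."""
import re
from collections import Counter

# Hypothetical path: paste the run reports from this issue into this file.
REPORT_PATH = "broken-runs.txt"

# Each failure is reported as "Failed: <spec name> {Kubernetes e2e suite}".
FAILED_RE = re.compile(r"Failed: (.+?) \{Kubernetes e2e suite\}")

def tally(path: str) -> Counter:
    """Count failures per spec name across all pasted run reports."""
    with open(path, encoding="utf-8") as fh:
        return Counter(FAILED_RE.findall(fh.read()))

if __name__ == "__main__":
    counts = tally(REPORT_PATH)
    for spec, n in counts.most_common(20):
        print(f"{n:3d}  {spec}")
```

Run against the listings above, this prints the most frequently failing specs first, which tends to be a better signal of a genuinely broken job than any single run.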
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/80/ Multiple broken tests: Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}
Issues about this test specifically: #37259 Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #27655 #33876 Failed: [k8s.io] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}
Issues about this test specifically: #30078 #30142 Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}
Issues about this test specifically: #35279 Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27406 #27669 #29770 #32642 Failed: [k8s.io] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart {Kubernetes e2e suite}
Failed: [k8s.io] Etcd failure [Disruptive] should recover from network partition with master {Kubernetes e2e suite}
Issues about this test specifically: #29512 Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity {Kubernetes e2e suite}
Issues about this test specifically: #30441 Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}
Issues about this test specifically: #27957
Failed: [k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart {Kubernetes e2e suite}
Issues about this test specifically: #31407
Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}
Issues about this test specifically: #36457
Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}
Issues about this test specifically: #26744 #26929
Failed: [k8s.io] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s, scaling up to 1 pods per node {Kubernetes e2e suite}
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27394 #27660 #28079 #28768 #35871
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #28657 #30519 #33878
Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #28019
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #30317 #31591 #37163
Failed: [k8s.io] HA-master survive addition/removal replicas different zones [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #28071
Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #28257 #29159 #29449 #32447 #37508
Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}
Issues about this test specifically: #36950
Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28091
Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Issues about this test specifically: #29816 #30018 #33974
Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #29516
Failed: [k8s.io] HA-master survive addition/removal replicas same zone [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}
Issues about this test specifically: #27502 #28722 #32037
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}
Issues about this test specifically: #31277 #31347 #31710 #32260 #32531
Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26784 #28384 #31935 #33023
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #37373
Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}
Issues about this test specifically: #27233 #36204
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}
Issues about this test specifically: #37479
Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}
Issues about this test specifically: #34223
Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #29514
Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #31918
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082
Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}
Issues about this test specifically: #33883
Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}
Issues about this test specifically: #35277
Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187
Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}
Issues about this test specifically: #27470 #30156 #34304
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #36914
Failed: [k8s.io] Etcd failure [Disruptive] should recover from SIGKILL {Kubernetes e2e suite}
Issues about this test specifically: #29444
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26982 #32214 #33994 #34035 #35399
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/81/ Multiple broken tests:
Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #31918
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}
Issues about this test specifically: #36794
Failed: [k8s.io] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27406 #27669 #29770 #32642
Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #29514
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s, scaling up to 1 pods per node {Kubernetes e2e suite}
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26982 #32214 #33994 #34035 #35399
Failed: [k8s.io] HA-master survive addition/removal replicas same zone [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}
Issues about this test specifically: #31277 #31347 #31710 #32260 #32531
Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #28091
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}
Issues about this test specifically: #37259
Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}
Issues about this test specifically: #33883
Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}
Issues about this test specifically: #36457
Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}
Issues about this test specifically: #27957
Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
Issues about this test specifically: #29816 #30018 #33974
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}
Issues about this test specifically: #34223
Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #28257 #29159 #29449 #32447 #37508
Failed: [k8s.io] Etcd failure [Disruptive] should recover from SIGKILL {Kubernetes e2e suite}
Issues about this test specifically: #29444
Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}
Issues about this test specifically: #27233 #36204
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}
Issues about this test specifically: #26784 #28384 #31935 #33023
Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}
Issues about this test specifically: #26744 #26929
Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] {Kubernetes e2e suite}
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
Issues about this test specifically: #30317 #31591 #37163
Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}
Issues about this test specifically: #27470 #30156 #34304
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #36914
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}
Issues about this test specifically: #37373
Failed: [k8s.io] HA-master survive addition/removal replicas different zones [Serial][Disruptive] {Kubernetes e2e suite}
Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}
Issues about this test specifically: #28019
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
Issues about this test specifically: #27394 #27660 #28079 #28768 #35871
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}
Issues about this test specifically: #28657 #30519 #33878
Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}
Issues about this test specifically: #35277
Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}
[FLAKE-PING] @ixdy @jszczepkowski This flaky-test issue would love to have more attention.
#37441 was merged. Latest run is green: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/82/ Resolving issue.
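For anyone verifying the same thing without Gubernator, here is a minimal sketch that reads the job's result metadata straight from the public kubernetes-jenkins GCS bucket. It assumes the conventional test-infra layout (logs/<job>/latest-build.txt and logs/<job>/<build>/finished.json); those paths and keys are assumptions based on that convention, not something stated in this issue.

```python
# Sketch: check the latest ci-kubernetes-e2e-gce-serial result via the public
# kubernetes-jenkins GCS bucket. Assumes the conventional test-infra layout:
#   logs/<job>/latest-build.txt   -> latest build number
#   logs/<job>/<build>/finished.json -> {"result": "SUCCESS", ...}
import json
import urllib.request

JOB = "ci-kubernetes-e2e-gce-serial"
BASE = "https://storage.googleapis.com/kubernetes-jenkins/logs/" + JOB

def fetch(path):
    # Read a small text object from the public bucket over HTTPS.
    with urllib.request.urlopen(BASE + path) as resp:
        return resp.read().decode("utf-8")

latest = fetch("/latest-build.txt").strip()
finished = json.loads(fetch("/" + latest + "/finished.json"))
print(latest, finished.get("result"))  # e.g. "82 SUCCESS"
```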
Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/47/
Run so broken it didn't make JUnit output!