Allow sidecars to terminate gracefully #1131
/kind bug
There was some discussion of this problem on the original design proposal. I think the main issue raised was that you can't guarantee an arbitrary sidecar has a `kill` binary.

@pierretasci Do you have ideas for ways this could be implemented that would support arbitrary containers?
Yeah, I read through the discussion on the original proposal and I think it makes sense. I do think that it wouldn't be particularly harmful to send a SIGTERM regardless. If the container doesn't have shutdown logic, no harm done, it seems. At worst, it would introduce an XX-second delay to completing each task.

I am of course open to other ideas, but I do think stateful sidecars (logging is another use case that comes to mind) will want a shutdown mechanism.
I totally agree, I'm just not entirely sure yet how this would be implemented if the container doesn't include a `kill` binary.
I've been skimming through the k8s codebase looking for the spot where the container termination flow is implemented, and it looks as though the precise implementation of the grace period is delegated to the container runtime.
Another alternative here would be to introduce an optional contract for sidecars: if you want to receive a SIGTERM / shutdown warning, then your sidecar must contain a `kill` binary on its `PATH`. This would provide enough of a stopgap until the sidecars KEP lands, I think.
Yeah, that is sort of what I was getting at above. If there is a `kill` binary, we provide graceful shutdown; otherwise the timeout and hot-swap seem a more than reasonable contract. I suspect the timeout will actually be enough for most sidecars, but having a dedicated way to shut down if it is required will help cover edge cases better.
OK, cool, so then the steps to implement are something like:
Sounds doable! I'll try to find some time this week or next to implement.
There's no need for a `kill` binary in the container. It's possible to send SIGTERM to the container's process 1: https://www.ctl.io/developers/blog/post/gracefully-stopping-docker-containers/
Also consider the new sidecar lifecycle coming in a future k8s version (maybe 1.19): kubernetes/enhancements#753
Many thanks @Mohjive. You mention that there's no need for a `kill` binary in the Pod's container, but then you forgot to demonstrate how to send SIGTERM without it 😆. Assume I'm a Linux noob: how do I send SIGTERM to a process without `kill`? I can't run a `kill` that isn't in the image.

Thanks for the pointer to the sidecar lifecycle KEP. It's definitely on our radar. We'd be very happy to remove the nop image hack and replace it with that feature once it's merged and k8s has committed to its support.
Docker/k8s isn't Linux, so you have to apply a different way of thinking when using containers. Containers don't run a full operating system, so there is no init process or daemons (it's possible to have an init process in a container, but that's a special use case).

I'm fairly new to k8s, but deleting a pod will send SIGTERM to all containers in that pod, and after a grace period it will send SIGKILL unless the container has exited. There might be other ways as well, but someone else has to explain that. :)

When a container receives a SIGTERM it will be received by process 1 in the container. Process 1 is the one defined by the `CMD` argument in the Dockerfile, the command specified in `kubectl run`, or whatever is set some other way, e.g. in your CRD. See https://pracucci.com/graceful-shutdown-of-kubernetes-pods.html for more details.
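To make the pod-deletion flow described above concrete, here is a minimal sketch (names are illustrative, and it assumes a shell-based image such as busybox): the shell is PID 1, so it receives the SIGTERM and can flush pending work before the SIGKILL deadline.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-shutdown-demo        # illustrative name
spec:
  # On `kubectl delete pod`, every container's PID 1 receives SIGTERM;
  # SIGKILL follows after this grace period if the process hasn't exited.
  terminationGracePeriodSeconds: 30
  containers:
  - name: uploader                   # illustrative sidecar-like container
    image: busybox
    command: ["sh", "-c"]
    args:
    - |
      # Trap SIGTERM so pending work can be flushed before exiting.
      trap 'echo "flushing pending uploads"; exit 0' TERM
      while true; do sleep 1; done
```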
True, we could delete the pod. We do want to keep it around for its logs though, at least until the user deletes the TaskRun that spawned it, or until we have some other established mechanism for persisting the logs outside of Kubernetes, such as the Tekton Results proposal.

From my perspective the sidecar lifecycle KEP seems like the ideal solution. I'd much prefer the platform expose a mechanism for this rather than us layering in workarounds.
Ok, so I did some quick research, and what I found is that it should be possible to run the docker CLI (if the k8s cluster uses the docker daemon shim) or the containerd CLI (https://github.com/projectatomic/containerd/blob/master/docs/cli.md) directly on the executor. So if there exists a Tekton container in the job pod, it could monitor the job container and use the right CLI to stop the sidecar containers when the job has exited.
So, as @sbwsg said, killing/deleting the Pod should be considered "not possible", at least for now. We already do that when cancelling a `TaskRun`.
We can't rely on a specific runtime CLI, as we have absolutely no guarantee about which runtime the Kubernetes cluster we are deployed on uses. It also means we wouldn't support some runtimes, and it breaks the abstraction: Tekton doesn't have to know what runtime Kubernetes uses.

What "could" be done though, is to do more or less the same as what we are doing for our steps, aka wrapping the sidecar's command with an entrypoint wrapper.
The trick for the wrapper is to know when to "gently kill" the sidecar process. We could do the same as we do with the steps' entrypoint.
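A rough sketch of that wrapper idea, assuming the sidecar image has a shell (the real mechanism would presumably be an injected binary, like the step entrypoint, so that no shell or `kill` binary is required); the marker-file path and all names here are made up:

```yaml
sidecars:
- name: uploader                       # illustrative name
  image: example.com/uploader          # illustrative image
  command: ["sh", "-c"]
  args:
  - |
    /usr/local/bin/uploader &          # the real sidecar process (illustrative path)
    child=$!
    # Wait for a marker file written once all steps have finished
    # (hypothetical path -- not an actual Tekton contract).
    while [ ! -f /shared/sidecar-stop ]; do sleep 1; done
    kill -TERM "$child"                # ask the sidecar to shut down cleanly
    wait "$child"                      # exit with the sidecar's status
```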
Currently, the sidecar is not behaving like a sidecar should. If there were no nop image hack in place, my sidecar would just complete its task and the container status would be Terminated. Isn't it better to leave it up to the user to decide how and when they want to terminate their sidecar?

If it's absolutely necessary for Tekton to take on the responsibility of killing the sidecar, can we at least have an option to disable it? For example, a flag inside the sidecar definition (sketched below) which would disable the replacement with the nop image.

Some more insight into why the newly introduced "Finally" wasn't good enough for me to do the same: it forces us to modify certain Dockerfiles.

Thanks!
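For concreteness, the kind of opt-out being asked for might look something like this; the field name is purely hypothetical and not part of the Tekton API:

```yaml
sidecars:
- name: metadata-uploader              # illustrative name
  image: example.com/uploader          # illustrative image
  # Hypothetical field: ask Tekton not to swap this container's image with
  # nop when the steps finish, letting it run to completion on its own.
  allowGracefulTermination: true       # not a real field; illustrative only
```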
Any thoughts @vdemeester / @sbwsg? I can contribute to this fix if you agree.
Hey, no apology necessary, it sometimes takes us a while to give new designs a proper look.

I am not opposed to the idea of introducing a mechanism to disable the "nop image swap" and let sidecars terminate themselves. There is no workaround that I can think of which would work today without code changes. The reason this is so tricky is the requirement for the Sidecar to run to completion even if a Step failed. If this weren't a constraint then you could program your last Step to wait for a signal from the Sidecar before the Step and Task end. But unfortunately Steps after the failed one will simply not run, so the Task will end immediately, killing the Sidecar no matter what.

Looking at the code as it currently stands, the best next step would be to write a short design doc and present it to the API Working Group to propose the idea. If you can't make it to a WG, feel free to write the doc and I'd be happy to communicate it out.
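For reference, the ruled-out workaround would look roughly like this (all names illustrative); it breaks down because the final step is skipped whenever an earlier step fails, so the Task ends and the Sidecar is killed anyway:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: upload-with-sidecar            # illustrative name
spec:
  sidecars:
  - name: uploader
    image: example.com/uploader        # assumed to write /shared/done once flushed
    volumeMounts:
    - name: shared
      mountPath: /shared
  steps:
  - name: build
    image: busybox
    command: ["sh", "-c", "echo building"]
  - name: wait-for-sidecar             # never runs if "build" fails
    image: busybox
    command: ["sh", "-c", "while [ ! -f /shared/done ]; do sleep 1; done"]
    volumeMounts:
    - name: shared
      mountPath: /shared
  volumes:
  - name: shared
    emptyDir: {}
```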
Thank you so much @sbwsg! I'll follow your advice and ping you back if needed.
@shahnewazrifat when you added a lifecycle / preStop hook to your sidecar, what did you see? Did it just not get called at all when we do the nop image swap?
@skaegi Precisely. It never got called. Here's the explanation from the Kubernetes docs. Looks like it only gets triggered under certain conditions, the interesting part being that it only fires when a container is terminated "due to an API request or management event":

> This hook is called immediately before a container is terminated due to an API request or management event such as liveness probe failure, preemption, resource contention and others. A call to the preStop hook fails if the container is already in terminated or completed state. It is blocking, meaning it is synchronous, so it must complete before the call to delete the container can be sent. No parameters are passed to the handler. A more detailed description of the termination behavior can be found in Termination of Pods.
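For reference, the kind of hook being discussed would look roughly like this (image and command are illustrative); in the scenario described above, where Tekton swaps the image to nop, it was simply never invoked:

```yaml
sidecars:
- name: metadata-uploader              # illustrative name
  image: example.com/uploader          # illustrative image
  lifecycle:
    preStop:
      exec:
        # Illustrative cleanup command; it never ran under the nop-image swap.
        command: ["sh", "-c", "/usr/local/bin/flush-uploads"]
```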
Issues go stale after 90d of inactivity.

/lifecycle stale

Send feedback to tektoncd/plumbing.
/remove-lifecycle stale

Open TEP: tektoncd/community#191
Kubernetes is introducing support for sidecar containers in alpha in the 1.28 release (kubernetes/enhancements#753). Once this feature is more stable, it could help us address this issue by replacing our logic that terminates sidecars using the nop image.
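As a sketch of what that feature looks like (alpha in 1.28 behind the SidecarContainers feature gate): a sidecar is declared as an init container with restartPolicy: Always, and the kubelet terminates it gracefully (SIGTERM, then SIGKILL after the grace period) once the main containers complete. Names below are illustrative.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: native-sidecar-demo            # illustrative name
spec:
  initContainers:
  - name: log-uploader                 # the sidecar
    image: example.com/uploader        # illustrative image
    restartPolicy: Always              # marks this init container as a sidecar
  containers:
  - name: step
    image: busybox
    command: ["sh", "-c", "echo doing work; sleep 5"]
```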
Expected Behavior
A sidecar might need to complete any pending work before being torn down (e.g., uploading metadata).
Actual Behavior
Sidecars are replaced with a `nop` image as soon as a TaskRun completes, preventing any cleanup work.

Steps to Reproduce the Problem
We would expect all of the output to be uploaded as the task runs, but whatever chunk isn't uploaded before the sidecar is torn down is presumably lost.
Additional Info
There are ways around this, but it seems to me that at the very least a sidecar should receive the `kill` signal so it can handle shutdown however it needs to. A timeout can always be inserted to ensure it doesn't hang forever.