java.net.SocketTimeoutException for logWatch() #5202
I also tried setting the requestTimeout to 0 in the client builder, but this didn't solve my problem either.
I think the issue is that all of the other clients default their timeouts to 0/unlimited, but okhttp does not, so it's still enforcing its default of 10 seconds. We'll need to change the OkHttpClientFactory to account for this. The workaround in the meantime is to do it on your own:
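(The code for the workaround wasn't captured in this thread; below is a minimal sketch, assuming the `additionalConfig` hook that the 6.x okhttp module's `OkHttpClientFactory` exposes, and a hypothetical class name. In OkHttp a timeout of 0 means "no timeout".)

```java
import java.util.concurrent.TimeUnit;

import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;
import io.fabric8.kubernetes.client.okhttp.OkHttpClientFactory;
import okhttp3.OkHttpClient;

public class NoReadTimeoutExample {
  public static void main(String[] args) {
    KubernetesClient client = new KubernetesClientBuilder()
        .withHttpClientFactory(new OkHttpClientFactory() {
          @Override
          protected void additionalConfig(OkHttpClient.Builder builder) {
            // Override OkHttp's 10s default; 0 disables the read timeout entirely
            builder.readTimeout(0, TimeUnit.MILLISECONDS);
          }
        })
        .build();
    // ... use client for logWatch() and other long-running requests
  }
}
```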
For okhttp this does affect http watches as well. #4949 also removed the code where the caller was setting the readTimeout to 0 for websocket requests, but that does not actually seem to matter to okhttp, as the read/write timeouts seem to only affect regular http requests.
Thanks for the fast reply. As the OkHttp dependency is not included with implementation scope, I had to add it first. But this seemed to work! :)
The new behavior (#4911) is not working out very well. I'd really like a way to ensure that long-running operations such as builds, log tailing, and so on have a guaranteed timeout of 0. The fixing PR seems to be failing on the e2e tests; probably the no-timeout assumption is having unwanted side effects.
That's a bit premature.
You can also entertain setting it explicitly to 0 for http watches and log watches, but after this fix those cases are covered as well. As mentioned on the issue, that was unnecessary for the websocket operations.
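For the calling-code side, a sketch of pinning it explicitly, assuming the config-level `requestTimeout` (in milliseconds, 0 = unlimited) is honored for these requests once the fix lands; the class name is a placeholder:

```java
import io.fabric8.kubernetes.client.Config;
import io.fabric8.kubernetes.client.ConfigBuilder;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

public class NoTimeoutConfigExample {
  public static void main(String[] args) {
    Config config = new ConfigBuilder()
        .withRequestTimeout(0) // 0 = no timeout for plain HTTP requests such as log watches
        .build();
    try (KubernetesClient client = new KubernetesClientBuilder().withConfig(config).build()) {
      // ... run log watches / http watches without an enforced request timeout
    }
  }
}
```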
That was related to the thread pool changes in the tests, not the timeout.
OK, good to know. I hadn't checked the PR changes and was worried that just by changing the timeouts everything else went south.
We probably want to do this too, just to make sure that these requests enforce a no-timeout behavior.
Log watch is HTTP. I was actually dealing with this same issue downstream and about to report it when I saw this one 😅
We'll need to release a 6.7.1 to fix these problems though
Describe the bug
In v6.6.2 the following code used to work fine:
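(The original snippet wasn't captured in this thread; the following is a minimal reconstruction of a typical logWatch() call, with the namespace and pod name as placeholders.)

```java
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;
import io.fabric8.kubernetes.client.dsl.LogWatch;

public class LogWatchRepro {
  public static void main(String[] args) throws InterruptedException {
    try (KubernetesClient client = new KubernetesClientBuilder().build();
         LogWatch watch = client.pods()
             .inNamespace("default")   // placeholder namespace
             .withName("my-pod")       // placeholder pod name
             .watchLog(System.out)) {
      // Keep the main thread alive while the pod's logs stream to stdout
      Thread.sleep(3_600_000L);
    }
  }
}
```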
Since version 6.7.0 I get an Exception. The Exception always occurs pretty fast, even if the underlying pod is not terminated yet (I used a sleep 3600 to simulate it). The pod is streaming continuous logs.
I also see the logs being consumed.
Fabric8 Kubernetes Client version
6.7.0
Steps to reproduce
Expected behavior
Having the old behavior: upon Pod termination, the stream should be gracefully closed.
Runtime
other (please specify in additional context)
Kubernetes API Server version
1.25.3@latest
Environment
Linux
Fabric8 Kubernetes Client Logs
Additional context
I'm using k3d