ServiceWorker lifetime and respondWith() with a user-constructed ReadableStream #882
At the face-to-face today we decided the code providing the JS stream must use `waitUntil()`. You are right, though: there will probably be a window between when the `waitUntil()` promise is resolved and when the browser consumes the JS buffers. It seems unlikely, though, that draining the buffers would last longer than the grace timer. Perhaps we should say the service worker should be kept alive long enough to consume the stream. That makes the default behavior to hold the worker alive. It could be an equivalent optimization, though, to let the SW die if a C++ stream is being consumed.
F2F meeting notes: https://lists.w3.org/Archives/Public/public-webapps/2016AprJun/0043.html

This sounds pretty good to me, if people are OK with it...
I guess that works as long as there is no way for JS to produce a C++ stream that is really sourced from JS. For example, writing to a file via a JS source stream and returning the file's C++ stream to `respondWith()`. To be honest, this scares me a little. Overall, though, I think requiring `waitUntil()` might be better since it covers all cases. We don't need to add magical keep-alive logic if there is a new way to transfer a stream from JS to C++. We just need a note that when the extended promises settle, the service worker should try to live until all JS buffers enqueued in transferring streams are consumed.
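A minimal sketch of the `waitUntil()` pattern described above, outside any spec text. `streamingResponse` is a hypothetical helper (not a real API): it returns a JS-backed response plus a promise that settles once the underlying source has handed all its buffers to the stream, which a fetch handler would pass to `event.waitUntil()`.

```js
// Hypothetical helper: build a Response from a JS-constructed ReadableStream
// and expose a promise that resolves when production is finished.
function streamingResponse(chunks) {
  let resolveDone;
  const done = new Promise(resolve => { resolveDone = resolve; });
  const encoder = new TextEncoder();
  const body = new ReadableStream({
    start(controller) {
      for (const chunk of chunks) controller.enqueue(encoder.encode(chunk));
      controller.close();
      resolveDone(); // all JS buffers are now in the stream's queue
    }
  });
  const response = new Response(body, {
    headers: { 'Content-Type': 'text/html' }
  });
  return { response, done };
}

// In a service worker you would then write something like:
// self.addEventListener('fetch', event => {
//   const { response, done } = streamingResponse(['<p>a</p>', '<p>b</p>']);
//   event.waitUntil(done);      // keep the worker alive while producing
//   event.respondWith(response);
// });
```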
Pre-F2F notes: Don't think there's anything to discuss here. I think we're happy with developers using `waitUntil()` for this.

F2F:
Here's what I'd like to see here:

```js
self.addEventListener('fetch', event => {
  event.respondWith(
    fetch(event.request)
  );
});
```

```js
self.addEventListener('fetch', event => {
  event.respondWith(
    caches.match(event.request)
  );
});
```

```js
self.addEventListener('fetch', event => {
  event.respondWith((async () => {
    const response = await fetch(event.request);
    return new Response(response.body, {
      headers: response.headers
    });
  })());
});
```

In the above cases, the service worker should be able to terminate once the response has been provided, before the resulting stream has been fully consumed by the browser, without breaking the stream. So if the response is a 3 hour movie, the SW does not need to stay alive while the user watches the 3 hour movie.

Whereas:

```js
self.addEventListener('fetch', event => {
  const responsePromises = [
    caches.match('/shell-start.inc'),
    fetch(event.request.url + '.inc'),
    caches.match('/shell-end.inc')
  ];
  const {writable, readable} = new TransformStream();
  (async () => {
    for await (const response of responsePromises) {
      await response.body.pipeTo(writable, {preventClose: true});
    }
    writable.close();
  })();
  event.respondWith(
    new Response(readable, {
      headers: {'Content-Type': 'text/html'}
    })
  );
});
```

…this may not work, as the SW may terminate before all of the responses have been piped into the stream.

To avoid the above, you'd need to do:

```js
self.addEventListener('fetch', event => {
  const responsePromises = [
    caches.match('/shell-start.inc'),
    fetch(event.request.url + '.inc'),
    caches.match('/shell-end.inc')
  ];
  const {writable, readable} = new TransformStream();
  event.waitUntil((async () => {
    for await (const response of responsePromises) {
      await response.body.pipeTo(writable, {preventClose: true});
    }
    writable.close();
  })());
  event.respondWith(
    new Response(readable, {
      headers: {'Content-Type': 'text/html'}
    })
  );
});
```

Now the `waitUntil()` promise keeps the service worker alive until every response has been piped through and the stream has been closed.

I'm curious about this case:

```js
event.waitUntil((async () => {
  for (let i = 0, response; (response = await responsePromises[i]); i++) {
    const isLast = i === responsePromises.length - 1;
    if (isLast) {
      response.body.pipeTo(writable);
    } else {
      await response.body.pipeTo(writable, {preventClose: true});
    }
  }
})());
```

Here the final `pipeTo()` is deliberately not awaited, so the `waitUntil()` promise resolves before the last response has been fully piped. Should the SW be able to close, allowing the stream to complete successfully? @domenic?
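The concatenation pattern in the examples above can be exercised outside a service worker, since web streams are plain globals (in browsers and Node 18+). `concatStreams` and `streamOf` below are illustrative helpers, not part of any spec:

```js
// Pipe several ReadableStreams through one writable end of a TransformStream,
// using preventClose so the combined readable stays open between pipes.
async function concatStreams(sources, writable) {
  for (const source of sources) {
    await source.pipeTo(writable, { preventClose: true });
  }
  await writable.close(); // only close once every source has been piped
}

// Helper: a ReadableStream that emits the given chunks and closes.
function streamOf(...chunks) {
  return new ReadableStream({
    start(controller) {
      chunks.forEach(chunk => controller.enqueue(chunk));
      controller.close();
    }
  });
}
```

If `concatStreams` is never awaited (or never tied to the worker's lifetime), nothing guarantees the producing side survives long enough to reach `writable.close()`, which is exactly the hazard the comment above describes.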
@jakearchibald, let me refer to the examples in your last comment as the "fetched streams case", the "JS-generated streams case", the "JS-generated streams case with waitUntil", and the "pipeTo-stream after waitUntil promise resolve case".

I was confused about the "fetched streams case" in our chat yesterday (@jakearchibald's last comment started to clarify that discussion). The steps that enqueue chunks to the fetchEvent's potential response in respondWith(r) run in parallel, so they don't hold the SW open until the whole body is handled. And in Handle Fetch, the SW basically waits until the fetchEvent's wait to respond flag is unset. That means the SW stays up until the reference to the fetchEvent's potential response is copied to the internal variable that will be returned to fetch. Hence, I don't think we have a problem with this case.

As for the "JS-generated streams case", the "JS-generated streams case with waitUntil" is the pattern that we reached consensus on at the last F2F. So, I think we can close this issue once the "pipeTo-stream after waitUntil promise resolve case" is clarified.
I'm not sure GCed is the right word here, since the reference is being held on to (by the implicit promise chain created by the for-await-of). I'm also not sure the right approach here is to, on termination of the service worker, reach into the objects owned and created by the web developer and move them into a new state (i.e. error them).

Maybe the level this should be handled at is where we "pipe" across the boundary to the main thread (https://w3c.github.io/ServiceWorker/#dom-fetchevent-respondwith step 8.3.2.5). The ReadableStream created on the main thread is owned by the user agent, which could decide to error it. In the service worker thread, either the ReadableStream should be canceled (using the reader that's being read from), or the thread should just shut down entirely, destroying all objects, not running any more JavaScript code, and forcibly releasing any resources like network socket handles. I guess you would spec that as a cancel anyway. Then the question becomes whether you want to error the main-thread ReadableStream, or just close it prematurely. I agree this should act as if the server simply cut off its response and reset the connection, but I'm not 100% sure this would result in an error today. But I trust you if you are sure that is the behavior.

It seems like, as written, the service worker should be able to close, since you didn't wait for the pipeTo to finish. But whether the stream will complete successfully is then up to the browser, I think. If the browser decides to destroy the service worker thread, the behind-the-scenes stream (and the identity transform it is passing through) will be killed, so the process of piping it to the main thread will be interrupted.
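One way to picture the "error the main-thread stream" option discussed above. `abortableStream` and its `terminate()` are made up for illustration: `terminate()` stands in for the user agent tearing down the producing side, erroring the consumer-facing stream so pending and future reads reject instead of closing cleanly:

```js
// Hypothetical sketch: a stream whose producer can be torn down, modeling the
// UA erroring the main-thread ReadableStream when the SW thread is destroyed.
function abortableStream() {
  let controller;
  const stream = new ReadableStream({
    start(c) { controller = c; } // producer never enqueues; it may be killed
  });
  return {
    stream,
    terminate: () => controller.error(new Error('worker terminated'))
  };
}
```

The alternative (closing prematurely) would instead call `controller.close()`, making the consumer see a short but "successful" body; the comment above argues an error, matching a reset connection, is the better analogy.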
F2F: What should happen if the SW terminates before the stream is complete? Error or early close?
F2F: If the stream is "JS created", we should consider keeping the SW alive after a fetch event until the stream has been consumed. We talked about this in #882 (comment), but after observing usage, it seems like we really need this. @domenic, can you post your code example?

Most cases are taken care of with `waitUntil()`, but we should still solve/conclude this. We'll address this in the next milestone.
I don't know if there are any Firefox developers in here, but FYI: Firefox 102 started to support transferable ReadableStreams, and that broke quite a few downloads for many users of my StreamSaver.js lib.

I deliberately didn't do any postMessage hack to keep the service worker alive, because Chrome could just take a ReadableStream from the main thread, pass it to the service worker, respond with that ReadableStream, and then terminate the service worker since it was not in use. The service worker didn't try to read or enqueue any data from the stream; it just forwarded the transferred readable stream from the main thread onward.

I expected that Firefox would eventually follow suit, but it didn't. My transferable-stream detection changed how the script behaved: Firefox would transfer the stream, and no postMessages were then sent to the service worker to keep it alive, because I thought they would not be necessary. But they were 😞. Now all browsers use the postMessage hack to keep the service worker alive.

With that said, can we shine any light on how the service worker should behave in the "ServiceWorker lifetime and respondWith() with a user-constructed ReadableStream" situation? I would say:
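For reference, the "postMessage hack" mentioned above amounts to pinging the service worker on a timer so the browser keeps seeing activity. A rough sketch of the page-side half; `startKeepAlive`, the `'ping'` message, and the interval are assumptions for illustration, not what StreamSaver.js literally ships:

```js
// Ping a service worker periodically so the browser treats it as active.
// `target` is anything with a postMessage() method, e.g.
// navigator.serviceWorker.controller in a page.
function startKeepAlive(target, intervalMs = 10000) {
  const id = setInterval(() => target.postMessage('ping'), intervalMs);
  return () => clearInterval(id); // call this once the download completes
}

// Page-side usage (sketch):
// const stop = startKeepAlive(navigator.serviceWorker.controller);
// ...when the stream is fully consumed: stop();
```

The SW side would simply add a no-op `message` listener; receiving the events is what resets the idle timer.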
Branched from yutakahirano/fetch-with-streams#63 (comment)

respondWith() with a user-constructed ReadableStream

When a user-constructed `ReadableStream` is being consumed by `respondWith()`, `respondWith()` doesn't know what's happening inside the `ReadableStream`. The `ReadableStream` interface of the instance facing `respondWith()` doesn't allow either scraping out its contents or knowing when `ReadableStreamDefaultController.close()` is invoked (note that a `ReadableStream` doesn't always have a `ReadableStreamDefaultController` behind it).

Until all the chunks are read out of the `ReadableStream`, it is holding some JS objects representing the underlying source and the chunks in its queue. So, we want to keep the ServiceWorker alive until they're drained.

Also, note that only `respondWith()` can observe the `ReadableStream`'s state while `respondWith()` is consuming it. Telling the event which stream the SW should wait for, like `evt.waitUntilClosed(aStream)` as Ben suggested at yutakahirano/fetch-with-streams#63 (comment), could work but is not good. `evt.respondWith(response, { waitForBody: true })` (yutakahirano/fetch-with-streams#63 (comment)), also by Ben, looks better. There we could have a timeout to abort receiving chunks even when `waitForBody` is specified, so that a badly behaving stream doesn't prolong the SW's lifetime forever.

respondWith() with a ReadableStream constructed by the Fetch API

I agree with Ben's point at yutakahirano/fetch-with-streams#63 (comment). This is also related to the discussion of how we realize an efficient `pipeTo()` algorithm, i.e. for certain operations, streams would behave like a simple identifier of a data source/destination, like a file descriptor.

I'll try to standardize all the interfaces for these issues by resuming the `pipeTo()` discussion threads.
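The observability constraint described in the issue body can be illustrated directly: a consumer in `respondWith()`'s position can only pull chunks and detect end-of-stream by reading to completion; it never sees the controller or the underlying source. `drain` is an illustrative helper:

```js
// Read a ReadableStream to completion, collecting its chunks. This is all a
// consumer like respondWith() can observe: chunks, then a done signal.
async function drain(stream) {
  const reader = stream.getReader();
  const chunks = [];
  while (true) {
    const { value, done } = await reader.read();
    if (done) return chunks; // the only visible trace of controller.close()
    chunks.push(value);
  }
}
```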