Automatic GC times out #3436
The context that's timing out is set up in:

```go
log.Info("Watermark exceeded. Starting repo GC...")

// 1 minute is sufficient for ~1GB unlink() blocks each of 100kb in SSD
_ctx, cancel := context.WithTimeout(ctx, time.Duration(gc.SlackGB)*time.Minute)
defer cancel()
if err := GarbageCollect(gc.Node, _ctx); err != nil {
	return err
}
```

StorageMax is 60G and StorageGC is 54G, so SlackGB is 6, and thus the timeout hits after 6 minutes. The time between the "starting gc" log line and the context timeout was only about 1.5 minutes when I witnessed this live.

@whyrusleeping @kevina @Kubuxu what would you say about just upping the timeout? HDDs and Flash will be much, much slower than what's anticipated here, and apparently even DigitalOcean's shared SSDs are too slow.
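For reference, a minimal sketch of how that deadline scales with the slack space, using the numbers reported above; gcTimeout here is an illustrative stand-in, not the actual go-ipfs function:

```go
package main

import (
	"fmt"
	"time"
)

// gcTimeout mirrors the calculation quoted above: one minute of budget per
// gigabyte of slack between StorageMax and the GC watermark.
// Illustrative only, not the real go-ipfs code.
func gcTimeout(slackGB int64) time.Duration {
	return time.Duration(slackGB) * time.Minute
}

func main() {
	// 60G StorageMax - 54G StorageGC watermark = 6 GB of slack,
	// so the GC run gets a fixed 6-minute deadline regardless of disk speed.
	fmt.Println(gcTimeout(6)) // 6m0s
}
```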
Or maybe just make it configurable.
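A configurable timeout could look roughly like the sketch below; the gcConfig struct and its GCTimeout field are hypothetical, not existing go-ipfs config keys:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// gcConfig is a hypothetical configuration; the real go-ipfs config has no
// GCTimeout field. It only illustrates the "make it configurable" idea.
type gcConfig struct {
	SlackGB   int64
	GCTimeout time.Duration // 0 means "fall back to the slack-based default"
}

// gcContext derives the GC deadline from the config, keeping the current
// one-minute-per-GB behaviour as the default.
func gcContext(ctx context.Context, cfg gcConfig) (context.Context, context.CancelFunc) {
	timeout := time.Duration(cfg.SlackGB) * time.Minute
	if cfg.GCTimeout > 0 {
		timeout = cfg.GCTimeout
	}
	return context.WithTimeout(ctx, timeout)
}

func main() {
	ctx, cancel := gcContext(context.Background(), gcConfig{SlackGB: 6, GCTimeout: 30 * time.Minute})
	defer cancel()
	deadline, _ := ctx.Deadline()
	fmt.Println(time.Until(deadline).Round(time.Minute)) // ~30m0s
}
```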
That metric just seems so wrong. Why does the slack space get to dictate how long this has to run? Why does this even need a timeout? That seems odd...
Okay, I'll look into removing the timeout, or making it something absurdly high.
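Either option amounts to a small change to the snippet quoted above; the sketch below shows both variants against that code (the 24-hour ceiling is an arbitrary placeholder, and this is not the actual patch):

```go
// Option 1: drop the per-run deadline entirely and let the parent context's
// cancellation (e.g. node shutdown) bound the GC run.
if err := GarbageCollect(gc.Node, ctx); err != nil {
	return err
}

// Option 2: keep a deadline, but make it an absurdly high safety net
// instead of deriving it from SlackGB.
_ctx, cancel := context.WithTimeout(ctx, 24*time.Hour)
defer cancel()
if err := GarbageCollect(gc.Node, _ctx); err != nil {
	return err
}
```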
Version information: 0.4.5-dev-e4be1b2
Type: Bug
Priority: P2
Description:
The automatic GC times out. StorageMax is 54G and the repo is 60G, so the timeout for GC should be something like 6 minutes.
From jupiter:
Note how the context times out after less than 1.5m already.