Rate limiter for parallel prehandling #8184
Conversation
Signed-off-by: CalvinNeo <[email protected]>
Signed-off-by: CalvinNeo <[email protected]>
Signed-off-by: CalvinNeo <[email protected]>
Signed-off-by: CalvinNeo <[email protected]>
Co-authored-by: JaySon <[email protected]>
Signed-off-by: CalvinNeo <[email protected]>
…tics into rate-limit-parallel-prehandle
/run-integration-test
Signed-off-by: CalvinNeo <[email protected]>
/run-integration-test
/run-unit-test
Rest LGTM
Co-authored-by: JaySon <[email protected]>
Co-authored-by: JaySon <[email protected]>
Co-authored-by: JaySon <[email protected]>
Co-authored-by: JaySon <[email protected]>
Co-authored-by: JaySon <[email protected]>
Signed-off-by: CalvinNeo <[email protected]>
…tics into rate-limit-parallel-prehandle
Signed-off-by: CalvinNeo <[email protected]>
/run-all-tests
```cpp
size_t total_concurrency = 0;
if (proxy_config.valid)
{
    total_concurrency = proxy_config.snap_handle_pool_size;
```
What is the default value of `snap_handle_pool_size`?
Would it be too big when falling through to `std::thread::hardware_concurrency()` below?
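
For context, here is a minimal sketch of the selection logic under discussion, assuming the `std::thread::hardware_concurrency()` fallback this comment refers to; the function name and the `ProxyConfig` struct are hypothetical, and only `proxy_config.valid` and `snap_handle_pool_size` come from the excerpt above:

```cpp
#include <cstddef>
#include <thread>

// Hypothetical stand-in for the proxy configuration in the excerpt above.
struct ProxyConfig
{
    bool valid = false;
    size_t snap_handle_pool_size = 0;
};

// Sketch: take the concurrency from the proxy config when it is valid,
// otherwise fall back to the hardware concurrency.
size_t pickTotalConcurrency(const ProxyConfig & proxy_config)
{
    size_t total_concurrency = 0;
    if (proxy_config.valid)
        total_concurrency = proxy_config.snap_handle_pool_size;
    if (total_concurrency == 0)
        total_concurrency = std::thread::hardware_concurrency();
    return total_concurrency;
}
```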
The idea here is that in the raftstore-v2 scenario, many tables have only one snapshot, so while the only region of such a table is still prehandling its snapshot, TiFlash can't actually serve anything from that table.
However, I think I can adopt the previous strategy, since it is less aggressive.
LGTM
/run-integration-test
The tidb-ci tests returned with exit code 1, let's try rerunning it.
Signed-off-by: CalvinNeo <[email protected]>
Signed-off-by: CalvinNeo <[email protected]>
/run-integration-test
Signed-off-by: CalvinNeo <[email protected]>
/run-all-tests
[APPROVALNOTIFIER] This PR is APPROVED.
This pull-request has been approved by: JaySon-Huang, JinheLin. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/run-all-tests
What problem does this PR solve?
Issue Number: close #8081
Problem Summary:
In the previous PR, we introduced parallel prehandling for a single big region. However, we believe there are also cases with a few, but more than one, ongoing big snapshots. In these cases, the second snapshot can't benefit from parallel prehandling.
What is changed and how it works?
The idea is that we introduce a parallel limit, which equals `snap-handle-pool-size`. Every subtask of a parallel prehandling task takes one parallel unit. If a prehandling task would take more parallel units than what's left, the task sleeps until some other prehandling subtask finishes. Note that there is much idle time before the first snapshot arrives.
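
Below is a minimal sketch of the scheme described above, not the actual TiFlash implementation; the class name, its members, and the acquire/release protocol are all hypothetical. A pool of parallel units sized by `snap-handle-pool-size` is shared by all prehandling tasks; each subtask takes one unit before running and sleeps while none are free:

```cpp
#include <condition_variable>
#include <cstddef>
#include <mutex>

// Hypothetical rate limiter: a counting pool of "parallel units"
// shared by all ongoing prehandling tasks.
class PrehandleRateLimiter
{
public:
    // `total` would be snap-handle-pool-size (or its fallback).
    explicit PrehandleRateLimiter(size_t total) : free_units(total) {}

    // Each prehandling subtask takes one unit before it starts.
    // If no unit is left, the caller sleeps until one is released.
    void acquire()
    {
        std::unique_lock lock(mu);
        cv.wait(lock, [this] { return free_units > 0; });
        --free_units;
    }

    // Called when a subtask finishes; wakes one sleeping subtask.
    void release()
    {
        {
            std::lock_guard lock(mu);
            ++free_units;
        }
        cv.notify_one();
    }

private:
    std::mutex mu;
    std::condition_variable cv;
    size_t free_units;
};
```

With something like this in place, a prehandling task split into several subtasks would call `acquire()` once per subtask, blocking whenever the pool is exhausted, and `release()` as each subtask completes, so a second concurrent big snapshot can still obtain units as the first one's subtasks finish.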
Check List
Tests
Side effects
Documentation
Release note