This repository has been archived by the owner on Nov 14, 2024. It is now read-only.
The scan the timelock server does to find the list of namespaces can read through far more data than expected. We run `SELECT DISTINCT(namespace) FROM paxosLog`, where there may be O(200) namespaces but the log itself can have millions of entries. Although there is an index on the column, the query still has to page through hundreds of thousands of copies of the same namespace string.
The solution here is probably to manually maintain a table of the namespaces and use database triggers or similar to keep it in sync - basically a manual index - though doing a suitable migration requires a bit of thought!
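The trigger-maintained table idea can be sketched with SQLite. All table, column, and trigger names below are illustrative, not the real Timelock schema; the actual database and migration path would need their own design:

```python
import sqlite3

# Hypothetical schema based on the issue: a paxosLog table with a 'namespace'
# column, plus a small manually-maintained 'namespaces' table kept in sync by
# an insert trigger (the "manual index").
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE paxosLog (namespace TEXT NOT NULL, seq INTEGER, val BLOB);
CREATE TABLE namespaces (namespace TEXT PRIMARY KEY);

-- On every log insert, record the namespace if it is not already known.
CREATE TRIGGER paxosLog_ns_insert AFTER INSERT ON paxosLog
BEGIN
    INSERT OR IGNORE INTO namespaces (namespace) VALUES (NEW.namespace);
END;
""")

# Many log entries collapse to one row per namespace in the side table.
conn.executemany(
    "INSERT INTO paxosLog (namespace, seq) VALUES (?, ?)",
    [(f"ns-{i % 3}", i) for i in range(10_000)],
)

# Listing namespaces is now a read of the tiny side table, instead of a
# DISTINCT scan that pages through every copy of each namespace in the log.
rows = [r[0] for r in conn.execute(
    "SELECT namespace FROM namespaces ORDER BY namespace")]
print(rows)  # ['ns-0', 'ns-1', 'ns-2']
```

The migration would also need a one-time backfill (e.g. `INSERT OR IGNORE INTO namespaces SELECT DISTINCT namespace FROM paxosLog`) run before the trigger is relied upon, which is part of what needs thought here.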
This endpoint is only really used by `timelock-migrate`, and it's usually not so bad a problem: while it's expensive, we can run it on one node at a time (so this costs HA rather than generally causing outages)... unless we need it when the cluster is degraded, as in the ticket.
jeremyk-91 changed the title from "getNamespaces() on a busy timelock server is costly" to "[PDS-332497] getNamespaces() on a busy timelock server is costly" on Feb 13, 2023.