ui: api calls return "max result size exceeded" error #96184
Comments
Another approach could be to still show the partial results when the max result size is exceeded. If I remember correctly, when we get this error we still get results as part of the payload. We can show a message that there are more results that weren't returned, along with the ones we got.
Previously, when the sql api returned a max size reached error, we were just showing the error, but not the data that was also being returned. This PR creates a new function to format the return of the api calls, so when it is a max size error it doesn't throw an error, but still passes that info so we can display a warning on the pages. This commit updates the Insights Workload > Statement page with the new behaviour. Following PRs will update other usages of the sql api. Part Of: cockroachdb#96184 Release note (ui change): Still show data on the console (with a warning) for Statement Insights when we reach a "max size exceeded" error from the sql api.
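For context, here is a minimal sketch of what such a formatting helper could look like. The names `SqlApiResponse`, `isMaxSizeError`, and `formatApiResult` are illustrative, not necessarily the ones in cluster-ui:

```typescript
// Hedged sketch of the helper described above: instead of throwing on a
// max-size error, keep the partial results and flag them for the page.
export interface SqlApiResponse<T> {
  maxSizeReached: boolean; // true when the sql api hit its result size limit
  results: T;
}

// The sql api reports the limit as an error message on the response.
export function isMaxSizeError(message: string): boolean {
  return message.includes("max result size exceeded");
}

export function formatApiResult<T>(
  results: T,
  error: Error | null,
  errContext: string,
): SqlApiResponse<T> {
  const maxSizeError = error != null && isMaxSizeError(error.message);

  // Any other error still fails the call, as before.
  if (error != null && !maxSizeError) {
    throw new Error(`${errContext}: ${error.message}`);
  }

  // For a max-size error, keep the partial results and flag them so the
  // page can render a warning banner instead of an error state.
  return { maxSizeReached: maxSizeError, results };
}
```

Each page can then check `maxSizeReached` and decide whether to render a warning alongside the data.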
97153: ui: show data when max size reached r=maryliag a=maryliag

Previously, when the sql api returned a max size reached error, we were just showing the error, but not the data that was also being returned. This PR creates a new function to format the return of the api calls, so when it is a max size error it doesn't throw an error, but still passes that info so we can display a warning on the pages. This commit updates the Insights Workload > Statement page with the new behaviour. Following PRs will update other usages of the sql api.

![Screenshot 2023-02-16 at 11 06 23 AM](https://user-images.githubusercontent.com/1017486/219422728-fe21a5ab-a6e1-4c27-94d2-d1ec482cac4c.png)

Demo: https://www.loom.com/share/9d9a24c486ce466ab355e1040af095c9

Part Of: #96184

Release note (ui change): Still show data on the console (with a warning) for Statement Insights when we reach a "max size exceeded" error from the sql api.

97229: go.mod: bump Pebble to 9b1142a5070e r=RaduBerinde a=RaduBerinde

9b1142a5 db: add ErrorIfNotPristine option
fbbc39d1 sstable: add readahead for value blocks

Release note: None
Epic: none

Co-authored-by: maryliag <[email protected]> Co-authored-by: Radu Berinde <[email protected]>
Previously, when the sql api returned a max size reached error, we were just showing the error, but not the data that was also being returned. This commit updates the Insights Workload > Transaction page with the new behaviour. Part Of: cockroachdb#96184 Release note (ui change): Still show data on the console (with a warning) for Transaction Insights when we reach a "max size exceeded" error from the sql api.
97277: ui: show data for txn insights when max size reached r=maryliag a=maryliag

Previously, when the sql api returned a max size reached error, we were just showing the error, but not the data that was also being returned. This commit updates the Insights Workload > Transaction page with the new behaviour.

Demo: https://www.loom.com/share/73ea263ab31f43d89b9412b7ba9d8710

![Screenshot 2023-02-16 at 7 32 09 PM](https://user-images.githubusercontent.com/1017486/219519311-64bf85a0-cdb3-483d-a1dd-e1772cad4472.png)
![Screenshot 2023-02-16 at 7 31 47 PM](https://user-images.githubusercontent.com/1017486/219519308-4f0b4215-fd94-4153-9f41-2d1f2ed1f00a.png)
![Screenshot 2023-02-16 at 7 31 39 PM](https://user-images.githubusercontent.com/1017486/219519303-d28dc7d7-5867-496a-add6-405b7f53b312.png)

Part Of: #96184

Release note (ui change): Still show data on the console (with a warning) for Transaction Insights when we reach a "max size exceeded" error from the sql api.

Co-authored-by: maryliag <[email protected]>
Previously, when the sql api returned a max size reached error, we were just showing the error, but not the data that was also being returned. This commit updates the Insights Schema page with the new behaviour. The query to retrieve the drop recommendations was returning all indexes, and the ui was doing the filtering. This commit also changes the query to only return the indexes with a drop recommendation, resulting in a lot less data being sent over the sql api and the size limit being reached less often. The ui then just needs to decide the type of drop (see the sketch below). Part Of: cockroachdb#96184 Release note (ui change): Still show data on the console (with a warning) for Schema Insights when we reach a "max size exceeded" error from the sql api.
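To illustrate that last step: once the query returns only indexes that already carry a drop recommendation, the client-side work reduces to classifying the drop. A hypothetical TypeScript sketch; the field and type names are assumptions, not the actual schema-insights types:

```typescript
// Assumed recommendation labels and usage fields for illustration only.
type DropType = "DROP_UNUSED" | "DROP_NEVER_USED";

interface IndexUsage {
  lastRead: Date | null; // null when the index has never been read
  createdAt: Date | null;
}

function dropRecommendationType(idx: IndexUsage): DropType {
  // An index with no reads since creation was never used; anything else
  // the query returns is merely unused past the threshold that the
  // query already applied server-side.
  const neverRead =
    idx.lastRead === null ||
    (idx.createdAt !== null && idx.lastRead <= idx.createdAt);
  return neverRead ? "DROP_NEVER_USED" : "DROP_UNUSED";
}
```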
97312: ui: show data when max size reached for schema insight r=maryliag a=maryliag

Previously, when the sql api returned a max size reached error, we were just showing the error, but not the data that was also being returned. This commit updates the Insights Schema page with the new behaviour. The query to retrieve the drop recommendations was returning all indexes, and the ui was doing the filtering. This commit also changes the query to only return the indexes with a drop recommendation, resulting in a lot less data being sent over the sql api and the size limit being reached less often. The ui then just needs to decide the type of drop.

![Screenshot 2023-02-17 at 6 24 36 PM](https://user-images.githubusercontent.com/1017486/219816418-35df6bff-ffe7-4339-813c-5436d2300650.png)

Part Of: #96184

Release note (ui change): Still show data on the console (with a warning) for Schema Insights when we reach a "max size exceeded" error from the sql api.

97330: logictest: add flag to disable workmem randomization r=DrewKimball a=DrewKimball

Logic tests randomize the `sql.distsql.temp_storage.workmem` setting, which limits the number of bytes a SQL processor can use before spilling to disk. This can cause significant variation in runtimes for a logic test. For example, running the `udf` logic test with the local configuration on my machine ranges anywhere from 10s to >100s. This is painful when development requires adding to a large logic test. This patch adds a flag, `--default-workmem`, to the `dev testlogic` command, which disables randomization of the workmem setting. On my machine, the local `udf` logic test consistently takes 10s when run with this flag. The flag can be used when manually running tests to speed up development.

Example usage with `dev testlogic`:
```
./dev testlogic base --files udf --ignore-cache --default-workmem
```
Example usage with `dev test`:
```
./dev test pkg/sql/logictest/tests/local -f TestLogic_udf --ignore-cache --test-args=-default-workmem
```

Epic: CRDB-20535
Release note: None

97390: sql: disallow UDF in SET DEFAULT and SET ON UPDATE r=chengxiong-ruan a=chengxiong-ruan

Epic: None.
Release note (sql change): Previously, users were able to sneak in UDF usage on tables with `SET DEFAULT` and `SET ON UPDATE` even though UDFs are disallowed in `CREATE TABLE` and `ADD COLUMN`. This patch disallows those two cases in `ALTER TABLE ALTER COLUMN`.

Co-authored-by: maryliag <[email protected]> Co-authored-by: Drew Kimball <[email protected]> Co-authored-by: Chengxiong Ruan <[email protected]>
Show events that were returned when we reach the max size limit of the sql-api. Part Of cockroachdb#96184 Release note: None
97442: kvserver: record ScanStats once per BatchRequest r=yuzefovich a=yuzefovich

Previously, we would record a `kvpb.ScanStats` object into the trace for each evaluated Get, Scan, and ReverseScan command. This was suboptimal for two reasons:
- it required an allocation of that `kvpb.ScanStats` object
- it required propagating all of these separate objects via the tracing infrastructure, which could cause the tracing limits to be reached, resulting in some objects being dropped.

This commit instead changes the ScanStats to be tracked at the BatchRequest level, so we only need to record a single object per BatchRequest. This results in reduced granularity, but that is still sufficient for the SQL needs, which simply aggregate all `kvpb.ScanStats` from a single SQL processor into one object. As a result, the tpch_concurrency metric averaged over 20 runs increased from 76.75 to 84.75.

Additionally, this commit makes it so that we track the number of Gets, Scans, and ReverseScans actually evaluated as part of the BatchResponse. This information is plumbed through a couple of protos but is not exposed in any SQL Observability virtual tables. Still, having it in the protos means this information will be included in the trace.

Informs: #64906. Fixes: #71351. Release note: None

97646: go.mod: bump Pebble to bc4c9afe47a5 r=sumeerbhola a=jbowens

```
bc4c9afe db: use objstorage provider for consistency check
7c833595 db: move CheckConsistency
470c6d49 build: add make targets for asan and msan; add ci jobs
b76bb914 ci: Continuously build for NetBSD
456fd724 vfs: Add support for NetBSD
79eb9477 ci: test against go1.20.1; update staticcheck
ab1a49c9 db: add per-level ValueBlocksSize gauge to Metrics
efd802f1 db: use objstorage to find obsolete objects
f01d8eff sstable: allow duplicate deletes in deleteObsoleteObject
822e1941 db: fix replay corruption in read only mode
b2e46077 db: add a ScanInternal to scan internal keys
e1e9c2a1 sharedobjcat: store object type
542e01ad sharedobjcat: add creator-id
6adb7237 docs: add benchmark annotation for grandparent boundary splitting
ebc640d1 objstorage: add Sync
ca1ed91d objstorage: return NotExistError when object is not known
```

Epic: None
Release note: None

97655: ui: show events even with max size limit reached r=maryliag a=maryliag

Show events that were returned when we reach the max size limit of the sql-api.

Part Of #96184

Demo: https://www.loom.com/share/31e8c55f091c4e2ba4b4243710aa58a4

Release note: None

Co-authored-by: Yahor Yuzefovich <[email protected]> Co-authored-by: Jackson Owens <[email protected]> Co-authored-by: maryliag <[email protected]>
Previously, if the call to retrieve contention/lock information returned a max size limit error, we were not displaying any data on the recent executions pages. Now we show the data returned, with a warning that cluster locks information might be missing. Part Of cockroachdb#96184 Release note (ui change): Still show active execution information when there is a max size limit error.
97486: ui: fix jobs page error state r=xinhaoz a=xinhaoz

Previously, the error state for the jobs page was only displayed if jobs had previously been returned. We should show the api error even when we have never received a successful jobs payload (e.g. an error on the first request). This commit changes error displaying on the jobs page such that we show the request error regardless of whether or not we have previously received data. If we have previous data when we receive an unsuccessful response, we show the error along with the existing data.

Epic: none
Release note (bug fix): The Jobs page properly shows the error state when we receive an error during data fetching.

97642: sqlstats: increase default value for deleted rows r=maryliag a=maryliag

During the sql stats compaction job, we limit the number of rows being deleted per transaction. We used a default value of 1024, but we have increasingly seen customers needing to raise this value to allow the job to keep up with the large amount of data being flushed. We have been recommending a value of 20k, so, being more conservative with the default (plus the changes in #97123 that won't let tables get into a state with so many rows), this commit changes the value to 10k.

Fixes #97528
Release note (sql change): Increase the default value of `sql.stats.cleanup.rows_to_delete_per_txn` to 10k, to increase the efficiency of the cleanup job for sql statistics.

97662: ui: show recent executions even without lock information r=maryliag a=maryliag

Previously, if the call to retrieve contention/lock information returned a max size limit error, we were not displaying any data on the recent executions pages. Now we show the data returned, with a warning that cluster locks information might be missing.

Part Of #96184

![Screenshot 2023-02-24 at 6 25 39 PM](https://user-images.githubusercontent.com/1017486/221320673-65a8f2eb-e7e3-4f27-b911-4111e8dc6728.png)

Release note (ui change): Still show active execution information when there is a max size limit error.

Co-authored-by: Xin Hao Zhang <[email protected]> Co-authored-by: maryliag <[email protected]>
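The jobs-page fix in #97486 above boils down to a small state decision. A minimal sketch of that logic, with assumed prop names (`jobs`, `lastError`) rather than the actual cluster-ui component:

```typescript
// Sketch of the error-state decision described in #97486: show the error
// when there is no data at all, and show data plus an inline error when a
// later refresh fails.
interface JobsPageState<T> {
  jobs: T[] | null; // null until the first successful response
  lastError: Error | null;
}

type View = "loading" | "error" | "data" | "data-with-error";

function jobsPageView<T>(s: JobsPageState<T>): View {
  if (s.lastError != null && s.jobs == null) {
    // Error on the first request: there is no stale data to fall back on.
    return "error";
  }
  if (s.lastError != null) {
    // A later refresh failed: keep the existing data, surface the error.
    return "data-with-error";
  }
  return s.jobs == null ? "loading" : "data";
}
```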
All places using the sql-api were updated in the PRs above to still show data when there is the error message.
Pages using the sql-over-http api can easily return the error `Error while retrieving insights information: max result size exceeded`. We should make sure these queries return fewer results, or we need to increase the result size in the request (a sketch of the latter follows below).
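For the second mitigation, here is a rough sketch of raising the budget on a sql-over-http request. The endpoint path and `max_result_size` field mirror the v2 SQL API, but treat the exact request shape, the size constant, and the query as assumptions for this example:

```typescript
// Assumed request shape for the sql-over-http (v2 SQL) API; verify the
// field names against the actual API before relying on this.
const LARGE_RESULT_SIZE = 50_000; // bytes; tune per page

async function fetchInsights(): Promise<unknown> {
  const resp = await fetch("/api/v2/sql/", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      execute: true,
      // A tighter LIMIT / narrower projection is the other mitigation:
      // it shrinks the payload instead of raising the ceiling.
      statements: [
        {
          sql: "SELECT * FROM crdb_internal.cluster_execution_insights LIMIT $1",
          arguments: [500],
        },
      ],
      max_result_size: LARGE_RESULT_SIZE,
    }),
  });
  return resp.json();
}
```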
Jira issue: CRDB-23990
gz#15832