
pageserver: add page_trace API for debugging #10293

Draft: wants to merge 7 commits into main
Conversation

@jcsp (Collaborator) commented Jan 7, 2025

Problem

When a pageserver is receiving a high rate of requests, we have no efficient way to discover what the client's access pattern is.

Closes: #10275

Summary of changes

  • Add /v1/tenant/x/timeline/y/page_trace?size_limit_bytes=...&time_limit_secs=... API, which returns a binary buffer of trace events. A tool to decode and report on the output will follow separately.

@erikgrinaker (Contributor) left a comment

LGTM.

.await?;

let (page_trace, mut trace_rx) = PageTrace::new(event_limit);
timeline.page_trace.store(Arc::new(Some(page_trace)));
Contributor:
Should this error if there's already a trace in progress?
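A minimal sketch of the "error if a trace is already in progress" idea, using a mutex-guarded slot in place of the PR's actual storage (the PR stores the trace differently); Timeline, PageTrace, and start_trace here are hypothetical stand-ins, not the real pageserver API:

```rust
// Hypothetical sketch: reject a second concurrent trace instead of
// silently replacing the first one. A std Mutex stands in for the
// PR's real timeline state; all names are illustrative.
use std::sync::Mutex;

struct PageTrace {
    size_limit: u64,
}

struct Timeline {
    page_trace: Mutex<Option<PageTrace>>,
}

fn start_trace(timeline: &Timeline, size_limit: u64) -> Result<(), &'static str> {
    let mut guard = timeline.page_trace.lock().unwrap();
    if guard.is_some() {
        // A concurrent request already installed a trace; error out
        // rather than clobbering it and closing its channel.
        return Err("page trace already in progress");
    }
    *guard = Some(PageTrace { size_limit });
    Ok(())
}

fn main() {
    let timeline = Timeline { page_trace: Mutex::new(None) };
    assert!(start_trace(&timeline, 1024).is_ok());
    // Second concurrent start is rejected while the first is active.
    assert!(start_trace(&timeline, 1024).is_err());
}
```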

Comment on lines +1570 to +1571
// Above code is infallible, so we guarantee to switch the trace off when done
timeline.page_trace.store(Arc::new(None));
Contributor:

nit: we could also stream to the client, and cancel if the client goes away.

pub(crate) fn new(
size_limit: u64,
) -> (Self, tokio::sync::mpsc::UnboundedReceiver<PageTraceEvent>) {
let (trace_tx, trace_rx) = tokio::sync::mpsc::unbounded_channel();
Contributor:
nit: we could also use a buffered channel with the max size here, to avoid the size accounting.
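The bounded-channel suggestion can be sketched with the standard library's sync_channel (the tokio analogue would be a bounded mpsc::channel); PageTraceEvent here is a simplified stand-in for the real event type, and the capacity counts events rather than bytes:

```rust
// Sketch of the reviewer's suggestion: size the channel to the event
// budget so a full channel enforces the limit, instead of counting
// buffer bytes by hand. The PR uses a tokio unbounded channel; this
// uses std's bounded sync_channel for a self-contained example.
use std::sync::mpsc::sync_channel;

#[derive(Debug, PartialEq)]
struct PageTraceEvent {
    key: u64,
}

fn main() {
    // Capacity = event limit (2 events here, for demonstration).
    let (tx, rx) = sync_channel::<PageTraceEvent>(2);

    // The hot path would use try_send so a full trace buffer drops
    // the event rather than ever blocking page service.
    assert!(tx.try_send(PageTraceEvent { key: 1 }).is_ok());
    assert!(tx.try_send(PageTraceEvent { key: 2 }).is_ok());
    // Third event exceeds the budget and is rejected.
    assert!(tx.try_send(PageTraceEvent { key: 3 }).is_err());

    assert_eq!(rx.recv().unwrap(), PageTraceEvent { key: 1 });
}
```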

github-actions bot commented Jan 7, 2025

7260 tests run: 6894 passed, 0 failed, 366 skipped (full report)


Flaky tests (2): Postgres 17, Postgres 15

Code coverage* (full report)

  • functions: 31.1% (8412 of 27041 functions)
  • lines: 47.8% (66791 of 139734 lines)

* collected from Rust tests only


This comment is automatically updated with the latest test results.
Last updated: 2b8b0f7 at 2025-01-09T12:01:20.878Z

@problame (Contributor) left a comment

Neat!

I think this is safe to deploy, barring the check_permission problem.

Nits can be addressed in a follow-up.

pageserver/src/http/routes.rs (resolved)

let size_limit =
parse_query_param::<_, u64>(&request, "size_limit_bytes")?.unwrap_or(1024 * 1024);
let time_limit_secs = parse_query_param::<_, u64>(&request, "time_limit_secs")?.unwrap_or(5);
Contributor:
nit: Why not parse a humantime::Duration?

Comment on lines 1552 to 1568
loop {
    let timeout = deadline.saturating_duration_since(Instant::now());
    tokio::select! {
        event = trace_rx.recv() => {
            buffer.extend(bincode::serialize(&event).unwrap());

            if buffer.len() >= size_limit as usize {
                // Size threshold reached
                break;
            }
        }
        _ = tokio::time::sleep(timeout) => {
            // Time threshold reached
            break;
        }
    }
}
Contributor:
nit: instead of repeating the select!(), I think it's better style to declare one async block that does the loop { trace_rx.recv().await; }, then poll that block inside a timeout.
Roughly like so:

tokio::time::timeout(Duration::from_secs(time_limit_secs), async {
    loop {
        let event = trace_rx.recv().await;
        ...
    }
}).await;

Comment on lines 1555 to 1556
event = trace_rx.recv() => {
    buffer.extend(bincode::serialize(&event).unwrap());
Contributor:
I first thought event was always Some(), but it isn't if this handler is called concurrently on the same timeline.

We should

  1. only write the Some() value to the buffer, and
  2. bail out of the loop as soon as recv() returns None
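A std-library sketch of a collection loop that addresses both review points at once: it stops at the deadline, and it stops as soon as the sender side is gone instead of serializing empty receives or busylooping. The u64 event and to_le_bytes "serialization" are simplifications standing in for the PR's tokio mpsc channel and bincode; collect_trace is a hypothetical name:

```rust
// Sketch: drain trace events until the size limit, the time limit,
// or channel closure, whichever comes first. recv_timeout covers
// both the deadline and the disconnected-sender case cleanly.
use std::sync::mpsc::{sync_channel, Receiver};
use std::time::{Duration, Instant};

fn collect_trace(rx: Receiver<u64>, size_limit: usize, time_limit: Duration) -> Vec<u8> {
    let deadline = Instant::now() + time_limit;
    let mut buffer = Vec::new();
    loop {
        let remaining = deadline.saturating_duration_since(Instant::now());
        match rx.recv_timeout(remaining) {
            Ok(event) => {
                // Only successful receives are written to the buffer.
                buffer.extend(event.to_le_bytes());
                if buffer.len() >= size_limit {
                    break; // size threshold reached
                }
            }
            // Either the deadline passed or all senders were dropped
            // (e.g. the trace was torn down): stop, no busyloop.
            Err(_) => break,
        }
    }
    buffer
}

fn main() {
    let (tx, rx) = sync_channel(16);
    tx.try_send(1u64).unwrap();
    tx.try_send(2u64).unwrap();
    drop(tx); // simulate the trace/timeline going away
    let buf = collect_trace(rx, 1024, Duration::from_secs(5));
    assert_eq!(buf.len(), 16); // two 8-byte events, then clean exit
}
```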

Contributor:

This is going to busyloop if the timeline is dropped, but seems fine to deploy temporarily for now.

Merging this pull request may close issue: Access pattern observation in keyspace ("pagetrace")
3 participants