
[protocol] Add range information to DocumentDiagnosticParams #1610

Open

ejgallego opened this issue Dec 12, 2022 · 2 comments
Labels: diagnostics, feature-request (Request for new features or functionality)

Comments


ejgallego commented Dec 12, 2022

As discussed in #737 (comment) and #718 (comment), for some languages computing diagnostics can be quite expensive.

In such cases, it'd be very convenient for the Pull Diagnostics request to carry a range parameter, so the server can opt to compute diagnostics only for the part of the file currently in view of the user.

The range is only meant to be a hint, and servers should be free to ignore it.
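
For concreteness, here is a rough sketch of what the extension could look like, using the TypeScript notation of the LSP specification; the optional `range` field is the proposed addition, the rest is the existing 3.17 shape of `DocumentDiagnosticParams`:

```typescript
/**
 * Sketch only: the existing DocumentDiagnosticParams plus the proposed
 * optional range hint.
 */
export interface DocumentDiagnosticParams extends WorkDoneProgressParams,
	PartialResultParams {
	/**
	 * The text document to pull diagnostics for.
	 */
	textDocument: TextDocumentIdentifier;

	/**
	 * The additional identifier provided during registration.
	 */
	identifier?: string;

	/**
	 * The result id of a previous response, if provided.
	 */
	previousResultId?: string;

	/**
	 * Proposed addition: the range currently visible to the user.
	 * A hint only; servers are free to ignore it and compute
	 * diagnostics for the whole document.
	 */
	range?: Range;
}
```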

I believe this change, together with the current support for Pull Diagnostics, would be enough to implement a reasonable "check on scroll" feature, which a few IDEs in the theorem-proving space already support.

cf. #1414


dbaeumer commented Jul 8, 2024

Thinking about this again, I am not so sure anymore that it is a good idea. IMO something like this only makes sense if the client supports streaming, where the range is a hint and the server first computes the diagnostics for that range and then for the rest of the document.

Otherwise the client would need to re-fetch diagnostics on scroll.
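
To make the distinction concrete, here is a hypothetical server-side flow under the assumption that partial results could stream the primary document's items (which 3.17 does not provide); `check` and `streamPartial` are assumed callbacks, not existing LSP API:

```typescript
import { Diagnostic, Range } from "vscode-languageserver-types";

// Hypothetical flow: with streaming, a range hint lets the server answer for
// the visible part first and fill in the rest of the document afterwards.
async function pullDiagnostics(
	uri: string,
	hint: Range | undefined,
	check: (uri: string, range?: Range) => Promise<Diagnostic[]>,
	streamPartial: (items: Diagnostic[]) => void,
): Promise<Diagnostic[]> {
	if (hint) {
		// Viewport first: low latency, but only useful if the client can
		// actually consume this partial result.
		streamPartial(await check(uri, hint));
	}
	// Then the full document; without streaming, this final result is the
	// only thing the client ever sees, so the hint buys nothing.
	return check(uri);
}
```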

dbaeumer modified the milestone from 3.18 to On Deck on Jul 8, 2024
ejgallego (Author) commented

Hi @dbaeumer, thanks very much for the feedback; I can indeed see your point.

Indeed, this would only work well with some incremental method, as you note (by the way, I'd love to have streaming support for Pull Diagnostics).

Let me try to summarize my findings about the situation for theorem provers using LSP. I'd venture that what I write below applies to Coq, Lean, and Isabelle, but as of today I can only really vouch for Coq.

Problem

Interactive theorem provers (ITPs) often work with long text files that take a long time to check.

Thus, it is a bad user experience to give users diagnostics only once the whole file has finished checking; ITPs usually implement incremental file checking so users get faster feedback. This incremental checking often works by delaying or skipping parts of the check.

For example, an ITP may choose to first check what is in view in the editor, and check the remaining proofs at a lower (idle) priority.
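
As a purely illustrative sketch of such a scheduling policy, independent of any particular prover (the `Sentence` shape and the line-based intersection test are assumptions):

```typescript
import { Range } from "vscode-languageserver-types";

// Hypothetical work item: a sentence/proof with its source range.
interface Sentence {
	range: Range;
	id: string;
}

// Put sentences intersecting the visible range at the front of the queue;
// everything else is checked afterwards, at idle priority.
function scheduleChecking(sentences: Sentence[], visible: Range): Sentence[] {
	const intersects = (a: Range, b: Range) =>
		a.start.line <= b.end.line && b.start.line <= a.end.line;
	return [
		...sentences.filter((s) => intersects(s.range, visible)),
		...sentences.filter((s) => !intersects(s.range, visible)),
	];
}
```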

What coq-lsp does

coq-lsp does several things to alleviate this:

  • it sends diagnostics as proofs are completed: this is heavy (and in principle O(n^2), since the accumulated diagnostics are re-sent each time), but works well in practice. That means latency is low: the user sees an error as soon as the corresponding part of the source has been processed. It can get very heavy on long files with lots of warnings, though; but for now all users enable it, thanks to the low latency and the relatively low number of diagnostics. coq-lsp also limits the number of errors in this mode to try to avoid overwhelming the client.
  • regarding checking, coq-lsp allows for two modes:
    • a continuous mode, which is CPU heavy, but often works pretty well in practice thanks to smart server-side caching
    • an on-demand mode: the client sends a (debounced) notification each time the user scrolls. coq-lsp then schedules checking of the visible part to provide minimal latency, and delays the rest. Diagnostics are still sent with the usual notification (see the sketch after this list).
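
For reference, a minimal sketch of the client side of that on-demand mode in a VS Code extension; the notification name "coq/viewRange" and the 250 ms debounce are illustrative choices, not necessarily what coq-lsp actually uses:

```typescript
import * as vscode from "vscode";
import { LanguageClient } from "vscode-languageclient/node";

// Debounce visible-range changes and tell the server what the user is
// currently looking at, so it can prioritise checking that region.
export function watchViewport(client: LanguageClient): vscode.Disposable {
	let timer: NodeJS.Timeout | undefined;
	return vscode.window.onDidChangeTextEditorVisibleRanges((ev) => {
		if (timer !== undefined) clearTimeout(timer);
		timer = setTimeout(() => {
			const range = ev.visibleRanges[0];
			if (range === undefined) return;
			client.sendNotification("coq/viewRange", {
				textDocument: { uri: ev.textEditor.document.uri.toString() },
				range: client.code2ProtocolConverter.asRange(range),
			});
		}, 250); // debounced so fast scrolling does not flood the server
	});
}
```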

So indeed, the current solution doesn't work too badly, but, as you noted, it relies on "simulating" streaming by abusing the diagnostics notification.
