Make sure we pass UTF-16 code unit offsets in all the LSP types
Xanewok opened this issue
cc #1112
cc microsoft/language-server-protocol#376
This causes problems with displaying correct diagnostic spans and code suggestion spans (here).
Currently the LSP specifies that all text offsets use UTF-16 code units (see the "Text Documents" section in the LSP specification), and so that's what the `Range` type is expected to carry.
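To see why the two unit systems disagree, here is a minimal illustration: any character outside the Basic Multilingual Plane (such as an emoji) is a single Unicode scalar value but a surrogate pair, i.e. two code units, in UTF-16.

```rust
fn main() {
    // A line containing a character outside the Basic Multilingual Plane.
    let line = "let x = \"😀\";";

    // Offsets counted in Unicode scalar values (what rls-span uses).
    let scalar_len = line.chars().count();

    // Offsets counted in UTF-16 code units (what the LSP mandates):
    // the emoji is a surrogate pair, so it counts as 2 units.
    let utf16_len = line.encode_utf16().count();

    assert_eq!(scalar_len, 12);
    assert_eq!(utf16_len, 13);
}
```

Any column to the right of such a character therefore differs between the two systems.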
However, the RLS uses its own `rls_span::Range` (from the rls-span crate, used both by rustc and the RLS), which specifies text offsets in Unicode scalar values (think Rust `char` and `chars()`), and which we naively transform to the LSP `Range` using `rls_to_range`: (bad!)
Line 130 in 816017b
For line numbers it doesn't matter, but we can only perform the UTF-16 code unit <-> Unicode scalar value offset conversion given the source line that the range operates on.
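A sketch of what that conversion could look like, assuming we have the source line in hand (the function name is illustrative, not the actual RLS code):

```rust
/// Hypothetical helper: convert a column counted in Unicode scalar values
/// into a column counted in UTF-16 code units, given the source line the
/// range operates on.
fn scalar_col_to_utf16_col(line: &str, scalar_col: usize) -> usize {
    line.chars()
        .take(scalar_col)
        .map(|c| c.len_utf16())
        .sum()
}

fn main() {
    let line = "a😀b";
    // Scalar column 2 points just past the emoji...
    assert_eq!(scalar_col_to_utf16_col(line, 2), 3);
    // ...because '😀' occupies two UTF-16 code units.
    assert_eq!('😀'.len_utf16(), 2);
}
```

The reverse direction (UTF-16 column to scalar column) needs the same line, walking chars and accumulating `len_utf16()` until the target is reached.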
It might make sense to add a method on the VFS (https://github.com/rust-dev-tools/rls-vfs) to convert given spans or columns.
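Such a VFS method might be shaped like this (rls-vfs has no such API today; the struct, method name, and signature here are all hypothetical):

```rust
use std::collections::HashMap;

/// Hypothetical in-memory VFS sketch: given a file, a row, and a column in
/// Unicode scalar values, return the column in UTF-16 code units. The real
/// rls-vfs would look the line up in its cached file contents.
struct Vfs {
    files: HashMap<String, String>,
}

impl Vfs {
    fn utf16_col(&self, file: &str, row: usize, scalar_col: usize) -> Option<usize> {
        let line = self.files.get(file)?.lines().nth(row)?;
        Some(line.chars().take(scalar_col).map(char::len_utf16).sum())
    }
}

fn main() {
    let mut files = HashMap::new();
    files.insert(
        "main.rs".to_string(),
        "fn main() {}\nlet s = \"𝕊\";".to_string(),
    );
    let vfs = Vfs { files };
    // '𝕊' (U+1D54A) lies outside the BMP, so it counts as 2 UTF-16 units:
    // scalar column 10 (just past the '𝕊') maps to UTF-16 column 11.
    assert_eq!(vfs.utf16_col("main.rs", 1, 10), Some(11));
}
```

Keeping the conversion next to the file cache avoids re-reading files just to recover the line a span points into.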
See #1112 and rust-dev-tools/rls-vfs#24 for related changes
Maybe we should ignore this problem and wait for (or push?) M$ to change that to UTF-8?
If enough clients/servers disregard the spec and unify on a sane alternative (byte or codepoint count), VSCode and the spec will eventually adapt. I suspect most tools use byte or codepoint counts until an issue gets opened due to a strange interaction with another LSP tool, at which point somebody reads the spec, re-reads it again, and goes through the various stages of grief...
Microsoft has control of the spec, but we, as tool writers, have no obligation to follow it to the letter, provided we unify on alternative behaviours and make it known.