For anybody interested in more examples, you can take a look at the Prisma VS Code extension. It was originally written 100% in TypeScript, but as most of our tooling is in Rust, we have started to move pieces over to Rust via wasm: https://www.prisma.io/blog/vscode-extension-prisma-rust-weba...
I have written several VSCode extensions using Rust so this was quite interesting.
> Build platform-specific binaries from the Rust crate and ... do some host identification magic .... This was also pretty easy to rule out because [of] ... the complexities of multi-platform builds.
This is how I currently do it, but I agree it is not ideal. Wasm is a nicer solution.
> Both tower-lsp and lspower require implementing the language server as an asynchronous service powered by Tower, a framework for async Rust. We didn't think the benefits of asynchrony would be worth the added complexity to the language server.
I definitely agree. Async is way over-used in Rust land. That said, it wasn't that hard to use tower-lsp. It's pretty well designed.
> we initialize the Wasm-ified polar-language-server and manage the server side of the LSP connection from TypeScript.
Does this mean that it's more difficult to use the LSP from anything except VSCode?
I think the solution I would go for is probably to compile my language server with Wasmtime, bundle Wasmtime for every platform I support and then just run it through that. Then I don't have to cross-compile my Rust code, but it's still a fully featured language server on its own.
> Does this mean that it's more difficult to use the LSP from anything except VSCode?
Could be... we shall see when we try porting it to a second IDE. :-D
I was so focused on VS Code that I haven't spent much time considering the set of changes that'll be required to support a second client.
> I think the solution I would go for is probably to compile my language server with Wasmtime, bundle Wasmtime for every platform I support and then just run it through that. Then I don't have to cross-compile my Rust code, but it's still a fully featured language server on its own.
Bundling in wasmtime is a really interesting idea. Thanks for sharing!
This article is super insightful, I believe more and more extensions will be taking this approach.
At Wasmer we are planning to do something similar for the VS Code WASM extension [1]. Right now it uses wabt.js (wabt compiled to Wasm + JS bindings with Emscripten), but there are some more interesting ways to approach automatic bindings via Interface Types... here are some prototypes! [2]
> We didn't think the benefits of asynchrony would be worth the added complexity to the language server. (For what it's worth, the folks behind Rust Analyzer, the most mature LSP implementation in Rust, seem to agree.)
That is interesting. Is it that much complexity when using async Rust?
It's not too bad, but a language server works entirely with file IO and local network requests, which are all quick anyway, so I doubt they'd see much benefit. File IO is actually slower when done asynchronously unless you use io_uring, which hardly anything supports yet.
The “initial scan” scenario also happens after git operations like pull, checkout etc. When working in a team where you pull and push often, it’s a real quality of life improvement if the IDE adapts quickly, so I wouldn’t dismiss that completely.
Async is also okay (not perfect) for CPU parallelism, with tokio::spawn etc., which I think is important for fast incremental compilation once everything is in RAM. To pick an example, say you change the type of a widely used function; now you have to typecheck the entire project again.
If I were to design a language server, my hope would be to achieve both good IO interleaving and good CPU parallelism with a single async based design.
> But an LSP mostly deals with a single request at a time from a single client.
Not really: there’s a bowl of spaghetti’s worth of concurrency in LSP. All of the following things can happen at the same time, and they use the same state:
* several read-only requests from the client (compute completion *and* highlighting)
* write request from the client (the user typed a comment in Cargo.toml)
* write request from the file system (user switched branches and Cargo.lock is now different)
* background project update (cargo metadata finished with a new project model)
* background compiler update (cargo check emitted a new diagnostic)
* updates from background “indexing” that was kicked off while in a quiescent state
That’s significantly more logical concurrency than in a typical HTTP server, where each request is conceptually independent from every other and all the hard synchronization bits lie in the database.
To manage this complexity, CSP-style APIs (async/await and blocking threads) don’t really work, as there’s too much fundamental shared mutable state. What sort-of-works (but is still complex) is an explicit actor model, where you manually code the state machine and explicitly resolve conflicts of the style “ok, I got results from cargo metadata, so I could spawn cargo check, but I also know that the user changed Cargo.toml in the meantime, so the results of metadata are actually stale and I’d better re-run that”.
In my experience, for the basics, Rust async is great; it gets messier around more complex things, e.g. async database transaction retries or async-at-depth type flows. Also, async traits are not yet available in the standard language (there is a crate for that).
For procedural code, async is fine. I still hate writing anything resembling a higher-order function (a function that takes or returns a function) in async Rust. The type constraints become longer than the code, and I can never seem to remember what I need to pin, unpin or box.
IMO it is, and usually the benefits of async Rust aren't that great for simple projects. To me the biggest annoyance is the fractured nature of async libraries: you have to choose a runtime and then the libraries that work with it. Tokio or async-std? They are working to fix it, but until some sort of "official" standard library for async exists it remains fractured...
Secondly, adding some parallelization in normal "vanilla" Rust with std seems rather easy with channels and Rayon, so I'm not looking to retry async Rust if I don't have to.
In practice there's no fragmentation problem: you use tokio, and never think about async-std again (despite the name, async-std is not standard/official in any way). Everyone supports tokio, and there isn't anything unique to async-std that doesn't have a tokio alternative.
This state totally sucks for the authors of async-std, who face an uphill battle getting traction in the tokio-dominated ecosystem, but from a user's perspective you can pretend the problem doesn't exist. Treat tokio as the standard library, as if it were hardcoded and you had no choice in the matter.
The fact that the Rust project resists standardizing too early — which could lead to being stuck with a bad design — is in my opinion one of its virtues.
But I agree that we’re past that point at least for simple async worker threads and simple IO. I don’t follow the progress that closely, but I think work is happening towards standardization there.
> The fact that the Rust project resists standardizing too early — which could lead to being stuck with a bad design — is in my opinion one of its virtues.
I think the same way, and was enthusiastic about async Rust at the beginning, but right now I think it has caused a mess in libraries. Maybe they did the standardization way too early? Maybe they could have waited until async libraries could be separated from runtimes, which I think they are now planning to do.
If you start to dig through otherwise useful libraries, you see a divide between those that are async and those that aren't, and on top of that, which runtime they have chosen. Mixing and matching between those is painful.
They mention that LSP will open the door to porting the functionality to IntelliJ. There's no native support for LSP in IntelliJ as far as I'm aware. I wonder how they plan to do that port when the time comes?