Not really - it gets very messy because every time you transform a future you have to figure out where you're getting the thread for that transformation to run on.
Sorry, what transformation? A future is simply a placeholder for something being computed asynchronously. In a threadful design you would simply spawn a thread (or pick one from a thread pool) to handle the computation. Normally your future runtime would handle it for you.
Basically you end up with something similar to the fork-join model.
Whenever you want to transform a result that's in a future, e.g. you have a future for a number and want to add 2 to it.
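To make the "which thread runs this?" question concrete, here's a minimal sketch using Java's CompletableFuture (the two pools are made up for illustration). `thenApply` runs the transformation on whatever thread happens to complete the future, while `thenApplyAsync` forces you to pick an executor explicitly at every call site:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class FutureTransform {
    public static void main(String[] args) throws Exception {
        // Hypothetical pools, just for illustration.
        ExecutorService ioPool = Executors.newFixedThreadPool(4);
        ExecutorService cpuPool = Executors.newFixedThreadPool(2);

        CompletableFuture<Integer> n =
            CompletableFuture.supplyAsync(() -> 40, ioPool);

        // thenApply: runs on whichever thread happens to complete `n`
        // (possibly an ioPool thread, possibly the caller). You don't control it.
        CompletableFuture<Integer> a = n.thenApply(x -> x + 2);

        // thenApplyAsync with an explicit executor: you've decided where the
        // transformation runs, but now every call site has to make (and get
        // right) that decision.
        CompletableFuture<Integer> b = n.thenApplyAsync(x -> x + 2, cpuPool);

        System.out.println(a.get() + " " + b.get());
        ioPool.shutdown();
        cpuPool.shutdown();
    }
}
```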
> In a threadful design you would simply spawn a thread (or pick one from a thread pool) to handle the computation.
If you allow yourself to spawn threads everywhere you'll quickly run out of resources. So you have to manage which thread pool you're using where and ensure you're not bringing in priority inversions etc. It's really not that easy.
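One concrete way this bites (a minimal sketch with Java's standard executors, nothing from the article): a task that blocks on another task submitted to the same bounded pool deadlocks the moment the pool is saturated.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PoolDeadlock {
    public static void main(String[] args) throws Exception {
        // A pool of one thread, to make the failure mode obvious.
        ExecutorService pool = Executors.newFixedThreadPool(1);

        Future<Integer> outer = pool.submit(() -> {
            // The inner task can never start: the only pool thread is
            // busy right here, blocked waiting for it. Deadlock.
            Future<Integer> inner = pool.submit(() -> 42);
            return inner.get();
        });

        System.out.println(outer.get()); // never prints
    }
}
```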
> Basically you end up with something similar to the fork-join model.
The fork-join model isn't really a purely thread-based model - the work-stealing technique is pretty much trying to reimplement what async-style code would do naturally.
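For concreteness, a minimal fork-join sketch using Java's ForkJoinPool: each task forks subtasks onto its worker's deque, and idle workers steal queued subtasks from the other end.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Sum a range by recursively forking subtasks; idle workers in the
// ForkJoinPool steal queued subtasks from busy workers' deques.
class SumTask extends RecursiveTask<Long> {
    private final long lo, hi;
    SumTask(long lo, long hi) { this.lo = lo; this.hi = hi; }

    @Override protected Long compute() {
        if (hi - lo <= 1_000) {            // small enough: compute directly
            long s = 0;
            for (long i = lo; i < hi; i++) s += i;
            return s;
        }
        long mid = (lo + hi) / 2;
        SumTask left = new SumTask(lo, mid);
        left.fork();                        // queue left half for stealing
        long right = new SumTask(mid, hi).compute();
        return right + left.join();         // join may run the queued task itself
    }
}

public class ForkJoinDemo {
    public static void main(String[] args) {
        System.out.println(new ForkJoinPool().invoke(new SumTask(0, 1_000_000)));
    }
}
```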
> ensure you're not bringing in priority inversions etc.
Remember we're comparing to async/futures, which aren't really guaranteed not to starve either. At least with thread pools you can, in theory, manage this well.
> Remember we're comparing to async/futures, which aren't really guaranteed not to starve either. At least with thread pools you can, in theory, manage this well.
With async/futures you're giving the runtime control over these decisions, whereas with threads you're managing them yourself. That can be an advantage, but only if you get the manual control right. An async/future runtime can know which tasks are waiting on which other tasks, letting it avoid deadlocks and many potential priority inversions. And the async style naturally lends itself to writing code that's logically end-to-end (on a single "fiber", even as that fiber moves between threads), which means there's less need to balance resources across multiple thread pools.
That's almost certainly a bad idea in a web server context like this article is talking about. You improve best-case latency when the server's not loaded, but now you're using 3 threads per request to get a less than 2x speedup (and in a bigger example it would be worse), so your scaling behaviour will get worse.
Always spawning a thread is of course the naive implementation. You can put an upper bound on the number of threads and fall back to synchronous execution of async operations in the worst case (for example inside the wait call); see the sketch below.
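Java's ThreadPoolExecutor actually exposes exactly that fallback as a rejection policy; a minimal sketch:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPool {
    public static void main(String[] args) {
        // At most 4 threads and 16 queued tasks; beyond that,
        // CallerRunsPolicy runs the task synchronously in the thread
        // that called execute() -- the "fall back to synchronous
        // execution in the worst case" idea.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            4, 4, 0L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<>(16),
            new ThreadPoolExecutor.CallerRunsPolicy());

        for (int i = 0; i < 100; i++) {
            final int id = i;
            pool.execute(() -> System.out.println(
                Thread.currentThread().getName() + " ran task " + id));
        }
        pool.shutdown();
    }
}
```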
If your threads are a bit more than dumb OS threads (say, a hybrid M:N scheduler) you can do smarter scheduling, including work stealing of course.
Well, as your threads become less like threads and more like a future/async runtime you come closer to the advantages and disadvantages of a future/async runtime, yes.
The underlying thread model has always been 'async' in some form under the hood, i.e. at some point there is always a multiplexer/scheduler that schedules continuations. Normally this lives inside the kernel, but M:N or purely userspace-based thread models have been used for decades.
Really the only difference between the modern async model and other 'threaded' models is its 'stackless' nature. This is both a major problem (due to the green/red function issue and the inability to abstract away asynchronicity) and an advantage (due to the guaranteed fixed stack size and the, IMHO overrated, ability to identify yield points).
At the end of the day it's continuations all the way down.
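To illustrate the 'stackless' point, a hand-written sketch of roughly what an async/await compiler generates (all names here are made up): the function's locals and resume point live in a heap object, and the continuation is just a method call, with no stack frame kept alive across the suspension.

```java
import java.util.function.Consumer;

// Roughly what a compiler makes of:
//   async int demo() { int a = await stepOne(); return a + 2; }
// Locals live in fields, not stack frames; `resume` is the continuation.
class DemoStateMachine {
    private int state = 0;   // where to continue from
    private int a;           // the local variable `a`, hoisted to the heap

    void resume(int value, Consumer<Integer> onDone) {
        switch (state) {
            case 0:
                state = 1;
                stepOne(v -> resume(v, onDone)); // suspend: no stack kept alive
                return;
            case 1:
                a = value;                        // result of the first await
                onDone.accept(a + 2);             // "return a + 2"
                return;
        }
    }

    // Stand-in async primitive: invokes its continuation with a result.
    private void stepOne(Consumer<Integer> k) { k.accept(40); }
}

public class Stackless {
    public static void main(String[] args) {
        new DemoStateMachine().resume(0, result ->
            System.out.println("result = " + result)); // prints 42
    }
}
```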