Hacker News | new | past | comments | ask | show | jobs | submit | login

In mainstream languages, synchronisation seems to be a means (locking code) to an end (locking data).

The programmer usually does not care that only one thread can enter a method at a time - that's just the 'how'. The 'why' is that the programmer wants to reason about reads and writes in a multithreaded environment as easily as one would in a single-threaded environment.

Compare lock management to memory management. Method synchronisation is analogous to using stack variables - you don't have to do the work manually, but you can't work across method boundaries.

Manually locking/unlocking a resource is analogous to malloc and free. It's error-prone, unless you adopt draconian rules that sacrifice flexibility.
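To make the analogy concrete, here's a sketch in Go (the `Counter` type and method names are made up for illustration): manual Lock/Unlock is the malloc/free of locking, while `defer` ties the unlock to scope, which is about as close as you get to a "stack variable" for locks.

```go
package main

import (
	"fmt"
	"sync"
)

// Counter is a value guarded by a mutex.
type Counter struct {
	mu sync.Mutex
	n  int
}

// Error-prone style: every exit path must remember to Unlock,
// just as every malloc needs a matching free.
func (c *Counter) IncManual() {
	c.mu.Lock()
	c.n++
	c.mu.Unlock() // forget this on one path and you deadlock
}

// Scope-tied style: defer releases the lock on every exit path,
// including early returns and panics.
func (c *Counter) IncDeferred() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.n++
}

func main() {
	var c Counter
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.IncDeferred()
		}()
	}
	wg.Wait()
	fmt.Println(c.n) // 100
}
```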

What's the GC of lock management?



Channel-based programming, perhaps? From Go-style channels to Erlang's shared-nothing processes, those approaches offer some of the properties you’re looking for.

Or maybe the holy grail hasn’t been created yet in this category: a compiler that can transform arbitrary computations or expressions of computational goals into their maximally parallel form, where locks etc. are compiler output artifacts a la assembly instructions rather than things programmers regularly interact with. Some academic languages and frameworks have made strides in this direction, but I don’t know of any that have caught on.


> Or maybe the holy grail hasn’t been created yet in this category: a compiler that can transform arbitrary computations or expressions of computational goals into their maximally parallel form…

I am not an expert in concurrency, so forgive my ignorance. If such a compiler existed, wouldn’t its purpose be defeated by external code? As in, someone provides a library whose concurrency properties are unknown.


What do you think of this https://github.com/HigherOrderCO/Bend ?


In the context of C#, most code does not actively share mutable state, even when it is heavily concurrent and/or parallel via Tasks or explicit threading.

In case the state is still shared, the most common scenarios are usually addressed by applying concurrent data structures (ConcurrentDictionary/Stack/Queue) or by using a Barrier/SemaphoreSlim.
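For the semaphore case, here's a hedged sketch of the same idea in Go rather than C# (the `maxConcurrent` helper is invented for illustration): a buffered channel is the usual Go stand-in for SemaphoreSlim, capping how many goroutines are inside a section at once.

```go
package main

import (
	"fmt"
	"sync"
)

// maxConcurrent runs n workers but lets at most limit of them be
// inside the guarded section at once, using a buffered channel as
// a counting semaphore. It returns the peak concurrency observed.
func maxConcurrent(limit, n int) int {
	sem := make(chan struct{}, limit)
	var mu sync.Mutex
	active, peak := 0, 0
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			sem <- struct{}{}        // acquire a slot
			defer func() { <-sem }() // release it

			mu.Lock()
			active++
			if active > peak {
				peak = active
			}
			mu.Unlock()

			mu.Lock()
			active--
			mu.Unlock()
		}()
	}
	wg.Wait()
	return peak
}

func main() {
	// Peak can never exceed the semaphore's capacity.
	fmt.Println(maxConcurrent(2, 8) <= 2) // true
}
```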

Where more complex mutable state must be shared in a non-thread-safe way, it is nowadays easily addressed with Channel&lt;T&gt;: a single reader owns the instance of a particular class and handles concurrently submitted requests in a serialised way.
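The single-reader pattern described above translates directly to Go; this is a sketch (the `owner`/`request` names are invented), not the C# original. One goroutine owns the non-thread-safe state - a plain map - and serialises concurrently submitted requests, replying on a per-request channel.

```go
package main

import "fmt"

// request asks the owning goroutine to adjust a counter and
// report its new value on the reply channel.
type request struct {
	key   string
	delta int
	reply chan int
}

// owner is the single reader: the map is touched by this
// goroutine only, so no locking is needed despite many writers.
func owner(reqs <-chan request) {
	counts := map[string]int{}
	for r := range reqs {
		counts[r.key] += r.delta
		r.reply <- counts[r.key]
	}
}

func main() {
	reqs := make(chan request)
	go owner(reqs)

	// Any number of goroutines may submit concurrently; the
	// owner processes the requests one at a time.
	replies := make(chan int)
	for i := 0; i < 10; i++ {
		go func() {
			r := make(chan int)
			reqs <- request{key: "hits", delta: 1, reply: r}
			replies <- <-r
		}()
	}
	max := 0
	for i := 0; i < 10; i++ {
		if v := <-replies; v > max {
			max = v
		}
	}
	fmt.Println(max) // 10
}
```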

Otherwise, I agree that correctly writing non-blocking code, or code with a granular locking strategy, is extremely non-trivial; the state space that must be handled for even the most trivial cases is enormous and makes my head hurt.

Some scenarios can be addressed by https://github.com/microsoft/coyote which simplifies the task, but it is still challenging.

Beyond the above, there exists an F# implementation of Concurrent ML that solves the problem in a CSP-style fashion, similar to Channel&lt;T&gt; above: https://github.com/Hopac/Hopac/blob/master/Docs/Programming....



